00:00:00.001 Started by upstream project "autotest-per-patch" build number 124200
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.052 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.108 The recommended git tool is: git
00:00:00.108 using credential 00000000-0000-0000-0000-000000000002
00:00:00.110 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.124 Fetching changes from the remote Git repository
00:00:00.127 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.141 Using shallow fetch with depth 1
00:00:00.141 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.141 > git --version # timeout=10
00:00:00.153 > git --version # 'git version 2.39.2'
00:00:00.153 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.166 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.166 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:10.435 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:10.449 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:10.463 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD)
00:00:10.463 > git config core.sparsecheckout # timeout=10
00:00:10.478 > git read-tree -mu HEAD # timeout=10
00:00:10.498 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5
00:00:10.523 Commit message: "pool: fixes for VisualBuild class"
00:00:10.523 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10
00:00:10.634 [Pipeline] Start of Pipeline
00:00:10.649 [Pipeline] library
00:00:10.652 Loading library shm_lib@master
00:00:10.652 Library shm_lib@master is cached. Copying from home.
00:00:10.670 [Pipeline] node
00:00:10.682 Running on CYP13 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:10.684 [Pipeline] {
00:00:10.694 [Pipeline] catchError
00:00:10.695 [Pipeline] {
00:00:10.706 [Pipeline] wrap
00:00:10.714 [Pipeline] {
00:00:10.719 [Pipeline] stage
00:00:10.721 [Pipeline] { (Prologue)
00:00:10.910 [Pipeline] sh
00:00:11.198 + logger -p user.info -t JENKINS-CI
00:00:11.218 [Pipeline] echo
00:00:11.220 Node: CYP13
00:00:11.229 [Pipeline] sh
00:00:11.535 [Pipeline] setCustomBuildProperty
00:00:11.547 [Pipeline] echo
00:00:11.548 Cleanup processes
00:00:11.555 [Pipeline] sh
00:00:11.842 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:11.842 3281291 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:11.859 [Pipeline] sh
00:00:12.187 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:12.187 ++ grep -v 'sudo pgrep'
00:00:12.187 ++ awk '{print $1}'
00:00:12.187 + sudo kill -9
00:00:12.187 + true
00:00:12.203 [Pipeline] cleanWs
00:00:12.213 [WS-CLEANUP] Deleting project workspace...
00:00:12.213 [WS-CLEANUP] Deferred wipeout is used...
00:00:12.221 [WS-CLEANUP] done
00:00:12.224 [Pipeline] setCustomBuildProperty
00:00:12.238 [Pipeline] sh
00:00:12.523 + sudo git config --global --replace-all safe.directory '*'
00:00:12.601 [Pipeline] nodesByLabel
00:00:12.603 Found a total of 2 nodes with the 'sorcerer' label
00:00:12.612 [Pipeline] httpRequest
00:00:12.616 HttpMethod: GET
00:00:12.617 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:12.622 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:12.627 Response Code: HTTP/1.1 200 OK
00:00:12.627 Success: Status code 200 is in the accepted range: 200,404
00:00:12.628 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:13.734 [Pipeline] sh
00:00:14.066 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:14.083 [Pipeline] httpRequest
00:00:14.089 HttpMethod: GET
00:00:14.089 URL: http://10.211.164.101/packages/spdk_ee2eae53a9bd1d3096e31af60895b50305a10a5f.tar.gz
00:00:14.093 Sending request to url: http://10.211.164.101/packages/spdk_ee2eae53a9bd1d3096e31af60895b50305a10a5f.tar.gz
00:00:14.119 Response Code: HTTP/1.1 200 OK
00:00:14.119 Success: Status code 200 is in the accepted range: 200,404
00:00:14.120 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_ee2eae53a9bd1d3096e31af60895b50305a10a5f.tar.gz
00:02:38.373 [Pipeline] sh
00:02:38.663 + tar --no-same-owner -xf spdk_ee2eae53a9bd1d3096e31af60895b50305a10a5f.tar.gz
00:02:41.977 [Pipeline] sh
00:02:42.262 + git -C spdk log --oneline -n5
00:02:42.262 ee2eae53a dif: Match enum spdk_dif_pi_format with NVMe spec
00:02:42.262 a3f6419f1 app/nvme_identify: Add NVM Identify Namespace Data for ELBA Format
00:02:42.263 3b7525570 nvme: Get PI format for Extended LBA format
00:02:42.263 1e8a0c991 nvme: Get NVM Identify Namespace Data for Extended LBA Format
00:02:42.263 493b11851 nvme: Use Host Behavior Support Feature to enable LBA Format Extension
00:02:42.274 [Pipeline] }
00:02:42.291 [Pipeline] // stage
00:02:42.300 [Pipeline] stage
00:02:42.301 [Pipeline] { (Prepare)
00:02:42.319 [Pipeline] writeFile
00:02:42.336 [Pipeline] sh
00:02:42.620 + logger -p user.info -t JENKINS-CI
00:02:42.633 [Pipeline] sh
00:02:42.919 + logger -p user.info -t JENKINS-CI
00:02:42.933 [Pipeline] sh
00:02:43.218 + cat autorun-spdk.conf
00:02:43.218 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:43.218 SPDK_TEST_NVMF=1
00:02:43.218 SPDK_TEST_NVME_CLI=1
00:02:43.218 SPDK_TEST_NVMF_NICS=mlx5
00:02:43.218 SPDK_RUN_UBSAN=1
00:02:43.218 NET_TYPE=phy
00:02:43.226 RUN_NIGHTLY=0
00:02:43.230 [Pipeline] readFile
00:02:43.254 [Pipeline] withEnv
00:02:43.256 [Pipeline] {
00:02:43.319 [Pipeline] sh
00:02:43.601 + set -ex
00:02:43.601 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:02:43.601 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:02:43.601 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:43.601 ++ SPDK_TEST_NVMF=1
00:02:43.601 ++ SPDK_TEST_NVME_CLI=1
00:02:43.601 ++ SPDK_TEST_NVMF_NICS=mlx5
00:02:43.601 ++ SPDK_RUN_UBSAN=1
00:02:43.601 ++ NET_TYPE=phy
00:02:43.601 ++ RUN_NIGHTLY=0
00:02:43.601 + case $SPDK_TEST_NVMF_NICS in
00:02:43.601 + DRIVERS=mlx5_ib
00:02:43.601 + [[ -n mlx5_ib ]]
00:02:43.601 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:43.601 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:43.601 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:43.601 rmmod: ERROR: Module irdma is not currently loaded
00:02:43.601 rmmod: ERROR: Module i40iw is not currently loaded
00:02:43.601 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:43.601 + true
00:02:43.601 + for D in $DRIVERS
00:02:43.601 + sudo modprobe mlx5_ib
00:02:43.862 + exit 0
00:02:43.873 [Pipeline] }
00:02:43.891 [Pipeline] // withEnv
00:02:43.896 [Pipeline] }
00:02:43.913 [Pipeline] // stage
00:02:43.921 [Pipeline] catchError
00:02:43.922 [Pipeline] {
00:02:43.932 [Pipeline] timeout
00:02:43.932 Timeout set to expire in 40 min
00:02:43.933 [Pipeline] {
00:02:43.944 [Pipeline] stage
00:02:43.945 [Pipeline] { (Tests)
00:02:43.957 [Pipeline] sh
00:02:44.255 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:02:44.255 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:02:44.255 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:02:44.255 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:02:44.255 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:02:44.255 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:02:44.255 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:02:44.255 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:02:44.255 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:02:44.256 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:02:44.256 + [[ nvmf-phy-autotest == pkgdep-* ]]
00:02:44.256 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:02:44.256 + source /etc/os-release
00:02:44.256 ++ NAME='Fedora Linux'
00:02:44.256 ++ VERSION='38 (Cloud Edition)'
00:02:44.256 ++ ID=fedora
00:02:44.256 ++ VERSION_ID=38
00:02:44.256 ++ VERSION_CODENAME=
00:02:44.256 ++ PLATFORM_ID=platform:f38
00:02:44.256 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:44.256 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:44.256 ++ LOGO=fedora-logo-icon
00:02:44.256 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:44.256 ++ HOME_URL=https://fedoraproject.org/
00:02:44.256 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:44.256 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:44.256 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:44.256 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:44.256 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:44.256 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:44.256 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:44.256 ++ SUPPORT_END=2024-05-14
00:02:44.256 ++ VARIANT='Cloud Edition'
00:02:44.256 ++ VARIANT_ID=cloud
00:02:44.256 + uname -a
00:02:44.256 Linux spdk-cyp-13 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:44.256 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:02:47.574 Hugepages
00:02:47.574 node hugesize free / total
00:02:47.574 node0 1048576kB 0 / 0
00:02:47.574 node0 2048kB 0 / 0
00:02:47.574 node1 1048576kB 0 / 0
00:02:47.574 node1 2048kB 0 / 0
00:02:47.574
00:02:47.574 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:47.574 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:02:47.574 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:02:47.574 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:02:47.574 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:02:47.574 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:02:47.574 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:02:47.574 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:02:47.574 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:02:47.574 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:02:47.574 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:02:47.574 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:02:47.574 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:02:47.574 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:02:47.574 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:02:47.574 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:02:47.574 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:02:47.574 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:02:47.574 + rm -f /tmp/spdk-ld-path
00:02:47.574 + source autorun-spdk.conf
00:02:47.574 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:47.574 ++ SPDK_TEST_NVMF=1
00:02:47.574 ++ SPDK_TEST_NVME_CLI=1
00:02:47.574 ++ SPDK_TEST_NVMF_NICS=mlx5
00:02:47.574 ++ SPDK_RUN_UBSAN=1
00:02:47.574 ++ NET_TYPE=phy
00:02:47.574 ++ RUN_NIGHTLY=0
00:02:47.574 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:47.574 + [[ -n '' ]]
00:02:47.574 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:02:47.574 + for M in /var/spdk/build-*-manifest.txt
00:02:47.574 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:47.574 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:02:47.574 + for M in /var/spdk/build-*-manifest.txt
00:02:47.574 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:47.574 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:02:47.574 ++ uname
00:02:47.574 + [[ Linux == \L\i\n\u\x ]]
00:02:47.574 + sudo dmesg -T
00:02:47.574 + sudo dmesg --clear
00:02:47.574 + dmesg_pid=3282874
00:02:47.574 + [[ Fedora Linux == FreeBSD ]]
00:02:47.574 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:47.574 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:47.574 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:47.574 + [[ -x /usr/src/fio-static/fio ]]
00:02:47.574 + export FIO_BIN=/usr/src/fio-static/fio
00:02:47.574 + FIO_BIN=/usr/src/fio-static/fio
00:02:47.574 + sudo dmesg -Tw
00:02:47.574 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:47.574 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:47.574 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:47.574 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:47.574 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:47.574 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:47.574 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:47.574 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:47.574 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:02:47.575 Test configuration:
00:02:47.575 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:47.575 SPDK_TEST_NVMF=1
00:02:47.575 SPDK_TEST_NVME_CLI=1
00:02:47.575 SPDK_TEST_NVMF_NICS=mlx5
00:02:47.575 SPDK_RUN_UBSAN=1
00:02:47.575 NET_TYPE=phy
00:02:47.575 RUN_NIGHTLY=0
11:11:16 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:02:47.575 11:11:16 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:47.575 11:11:16 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:47.575 11:11:16 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:47.575 11:11:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:47.575 11:11:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:47.575 11:11:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:47.575 11:11:16 -- paths/export.sh@5 -- $ export PATH
00:02:47.575 11:11:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:47.575 11:11:16 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:02:47.575 11:11:16 -- common/autobuild_common.sh@437 -- $ date +%s
00:02:47.575 11:11:16 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718010676.XXXXXX
00:02:47.575 11:11:16 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718010676.ITbvoq
00:02:47.575 11:11:16 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:02:47.575 11:11:16 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:02:47.575 11:11:16 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
00:02:47.575 11:11:16 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:02:47.575 11:11:16 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:02:47.575 11:11:16 -- common/autobuild_common.sh@453 -- $ get_config_params
00:02:47.575 11:11:16 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:02:47.575 11:11:16 -- common/autotest_common.sh@10 -- $ set +x
00:02:47.575 11:11:16 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:02:47.575 11:11:16 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:02:47.575 11:11:16 -- pm/common@17 -- $ local monitor
00:02:47.575 11:11:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:47.575 11:11:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:47.575 11:11:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:47.575 11:11:16 -- pm/common@21 -- $ date +%s
00:02:47.575 11:11:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:47.575 11:11:16 -- pm/common@25 -- $ sleep 1
00:02:47.575 11:11:16 -- pm/common@21 -- $ date +%s
00:02:47.575 11:11:16 -- pm/common@21 -- $ date +%s
00:02:47.575 11:11:16 -- pm/common@21 -- $ date +%s
00:02:47.575 11:11:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718010676
00:02:47.575 11:11:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718010676
00:02:47.575 11:11:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718010676
00:02:47.575 11:11:16 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718010676
00:02:47.575 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718010676_collect-vmstat.pm.log
00:02:47.575 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718010676_collect-cpu-temp.pm.log
00:02:47.575 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718010676_collect-cpu-load.pm.log
00:02:47.575 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718010676_collect-bmc-pm.bmc.pm.log
00:02:48.518 11:11:17 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:02:48.518 11:11:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:48.518 11:11:17 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:48.518 11:11:17 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:02:48.518 11:11:17 -- spdk/autobuild.sh@16 -- $ date -u
00:02:48.518 Mon Jun 10 09:11:17 AM UTC 2024
00:02:48.518 11:11:17 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:48.518 v24.09-pre-60-gee2eae53a
00:02:48.518 11:11:17 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:48.518 11:11:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:48.518 11:11:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:48.518 11:11:17 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:02:48.518 11:11:17 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:02:48.518 11:11:17 -- common/autotest_common.sh@10 -- $ set +x
00:02:48.518 ************************************
00:02:48.518 START TEST ubsan
00:02:48.518 ************************************
00:02:48.518 11:11:17 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan'
00:02:48.518 using ubsan
00:02:48.518
00:02:48.518 real 0m0.000s
00:02:48.518 user 0m0.000s
00:02:48.518 sys 0m0.000s
00:02:48.518 11:11:17 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable
00:02:48.518 11:11:17 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:48.519 ************************************
00:02:48.519 END TEST ubsan
00:02:48.519 ************************************
00:02:48.519 11:11:17 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:48.519 11:11:17 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:48.519 11:11:17 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:48.519 11:11:17 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:48.519 11:11:17 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:48.519 11:11:17 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:48.519 11:11:17 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:48.519 11:11:17 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:48.519 11:11:17 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
00:02:48.780 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk
00:02:48.780 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:02:49.042 Using 'verbs' RDMA provider
00:03:04.904 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:17.142 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:17.142 Creating mk/config.mk...done.
00:03:17.142 Creating mk/cc.flags.mk...done.
00:03:17.142 Type 'make' to build.
00:03:17.142 11:11:45 -- spdk/autobuild.sh@69 -- $ run_test make make -j144
00:03:17.142 11:11:45 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:03:17.142 11:11:45 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:03:17.142 11:11:45 -- common/autotest_common.sh@10 -- $ set +x
00:03:17.142 ************************************
00:03:17.142 START TEST make
00:03:17.142 ************************************
00:03:17.142 11:11:45 make -- common/autotest_common.sh@1124 -- $ make -j144
00:03:17.142 make[1]: Nothing to be done for 'all'.
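[Editor's note] The Prologue, Prepare, and build steps traced above reduce to a short, reusable shell sequence: kill leftover processes from a previous run, reset the RDMA driver modules that match SPDK_TEST_NVMF_NICS, then configure and build SPDK. The sketch below is a hypothetical standalone replay of those steps for a throwaway test host, not the job's actual scripts (autorun.sh and autoruner.sh do considerably more); the workspace path, module names, configure flags, and -j value are copied verbatim from the log.

#!/usr/bin/env bash
set -ex

# Assumption: same workspace layout as this Jenkins job.
WORKSPACE=/var/jenkins/workspace/nvmf-phy-autotest

# Cleanup processes: find anything left over from a previous run and kill it,
# tolerating the no-match case (mirrors the pgrep | grep -v | awk | kill trace).
pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
sudo kill -9 $pids || true

# Prepare stage: pick the driver list from the job config, unload any competing
# RDMA modules ("not currently loaded" errors are expected), load the right one.
source "$WORKSPACE/autorun-spdk.conf"
case "$SPDK_TEST_NVMF_NICS" in
    mlx5) DRIVERS=mlx5_ib ;;
esac
sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
for D in $DRIVERS; do
    sudo modprobe "$D"
done

# Build: the same configure options autobuild logged above.
cd "$WORKSPACE/spdk"
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-shared
make -j144   # the job's value; $(nproc) is the safer choice elsewhere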
00:03:25.327 The Meson build system
00:03:25.327 Version: 1.3.1
00:03:25.327 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk
00:03:25.327 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp
00:03:25.327 Build type: native build
00:03:25.327 Program cat found: YES (/usr/bin/cat)
00:03:25.327 Project name: DPDK
00:03:25.327 Project version: 24.03.0
00:03:25.327 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:25.327 C linker for the host machine: cc ld.bfd 2.39-16
00:03:25.327 Host machine cpu family: x86_64
00:03:25.327 Host machine cpu: x86_64
00:03:25.327 Message: ## Building in Developer Mode ##
00:03:25.327 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:25.327 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:03:25.327 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:25.327 Program python3 found: YES (/usr/bin/python3)
00:03:25.327 Program cat found: YES (/usr/bin/cat)
00:03:25.327 Compiler for C supports arguments -march=native: YES
00:03:25.327 Checking for size of "void *" : 8
00:03:25.327 Checking for size of "void *" : 8 (cached)
00:03:25.328 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:03:25.328 Library m found: YES
00:03:25.328 Library numa found: YES
00:03:25.328 Has header "numaif.h" : YES
00:03:25.328 Library fdt found: NO
00:03:25.328 Library execinfo found: NO
00:03:25.328 Has header "execinfo.h" : YES
00:03:25.328 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:25.328 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:25.328 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:25.328 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:25.328 Run-time dependency openssl found: YES 3.0.9
00:03:25.328 Run-time dependency libpcap found: YES 1.10.4
00:03:25.328 Has header "pcap.h" with dependency libpcap: YES
00:03:25.328 Compiler for C supports arguments -Wcast-qual: YES
00:03:25.328 Compiler for C supports arguments -Wdeprecated: YES
00:03:25.328 Compiler for C supports arguments -Wformat: YES
00:03:25.328 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:25.328 Compiler for C supports arguments -Wformat-security: NO
00:03:25.328 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:25.328 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:25.328 Compiler for C supports arguments -Wnested-externs: YES
00:03:25.328 Compiler for C supports arguments -Wold-style-definition: YES
00:03:25.328 Compiler for C supports arguments -Wpointer-arith: YES
00:03:25.328 Compiler for C supports arguments -Wsign-compare: YES
00:03:25.328 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:25.328 Compiler for C supports arguments -Wundef: YES
00:03:25.328 Compiler for C supports arguments -Wwrite-strings: YES
00:03:25.328 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:25.328 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:25.328 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:25.328 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:25.328 Program objdump found: YES (/usr/bin/objdump)
00:03:25.328 Compiler for C supports arguments -mavx512f: YES
00:03:25.328 Checking if "AVX512 checking" compiles: YES
00:03:25.328 Fetching value of define "__SSE4_2__" : 1
00:03:25.328 Fetching value of define "__AES__" : 1
00:03:25.328 Fetching value of define "__AVX__" : 1
00:03:25.328 Fetching value of define "__AVX2__" : 1
00:03:25.328 Fetching value of define "__AVX512BW__" : 1
00:03:25.328 Fetching value of define "__AVX512CD__" : 1
00:03:25.328 Fetching value of define "__AVX512DQ__" : 1
00:03:25.328 Fetching value of define "__AVX512F__" : 1
00:03:25.328 Fetching value of define "__AVX512VL__" : 1
00:03:25.328 Fetching value of define "__PCLMUL__" : 1
00:03:25.328 Fetching value of define "__RDRND__" : 1
00:03:25.328 Fetching value of define "__RDSEED__" : 1
00:03:25.328 Fetching value of define "__VPCLMULQDQ__" : 1
00:03:25.328 Fetching value of define "__znver1__" : (undefined)
00:03:25.328 Fetching value of define "__znver2__" : (undefined)
00:03:25.328 Fetching value of define "__znver3__" : (undefined)
00:03:25.328 Fetching value of define "__znver4__" : (undefined)
00:03:25.328 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:25.328 Message: lib/log: Defining dependency "log"
00:03:25.328 Message: lib/kvargs: Defining dependency "kvargs"
00:03:25.328 Message: lib/telemetry: Defining dependency "telemetry"
00:03:25.328 Checking for function "getentropy" : NO
00:03:25.328 Message: lib/eal: Defining dependency "eal"
00:03:25.328 Message: lib/ring: Defining dependency "ring"
00:03:25.328 Message: lib/rcu: Defining dependency "rcu"
00:03:25.328 Message: lib/mempool: Defining dependency "mempool"
00:03:25.328 Message: lib/mbuf: Defining dependency "mbuf"
00:03:25.328 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:25.328 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:25.328 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:25.328 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:25.328 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:25.328 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:03:25.328 Compiler for C supports arguments -mpclmul: YES
00:03:25.328 Compiler for C supports arguments -maes: YES
00:03:25.328 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:25.328 Compiler for C supports arguments -mavx512bw: YES
00:03:25.328 Compiler for C supports arguments -mavx512dq: YES
00:03:25.328 Compiler for C supports arguments -mavx512vl: YES
00:03:25.328 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:25.328 Compiler for C supports arguments -mavx2: YES
00:03:25.328 Compiler for C supports arguments -mavx: YES
00:03:25.328 Message: lib/net: Defining dependency "net"
00:03:25.328 Message: lib/meter: Defining dependency "meter"
00:03:25.328 Message: lib/ethdev: Defining dependency "ethdev"
00:03:25.328 Message: lib/pci: Defining dependency "pci"
00:03:25.328 Message: lib/cmdline: Defining dependency "cmdline"
00:03:25.328 Message: lib/hash: Defining dependency "hash"
00:03:25.328 Message: lib/timer: Defining dependency "timer"
00:03:25.328 Message: lib/compressdev: Defining dependency "compressdev"
00:03:25.328 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:25.328 Message: lib/dmadev: Defining dependency "dmadev"
00:03:25.328 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:25.328 Message: lib/power: Defining dependency "power"
00:03:25.328 Message: lib/reorder: Defining dependency "reorder"
00:03:25.328 Message: lib/security: Defining dependency "security"
00:03:25.328 Has header "linux/userfaultfd.h" : YES
00:03:25.328 Has header "linux/vduse.h" : YES
00:03:25.328 Message: lib/vhost: Defining dependency "vhost"
00:03:25.328 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:25.328 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:25.328 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:25.328 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:25.328 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:25.328 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:25.328 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:25.328 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:25.328 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:25.328 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:25.328 Program doxygen found: YES (/usr/bin/doxygen)
00:03:25.328 Configuring doxy-api-html.conf using configuration
00:03:25.328 Configuring doxy-api-man.conf using configuration
00:03:25.328 Program mandb found: YES (/usr/bin/mandb)
00:03:25.328 Program sphinx-build found: NO
00:03:25.328 Configuring rte_build_config.h using configuration
00:03:25.328 Message:
00:03:25.328 =================
00:03:25.328 Applications Enabled
00:03:25.328 =================
00:03:25.328
00:03:25.328 apps:
00:03:25.328
00:03:25.328
00:03:25.328 Message:
00:03:25.328 =================
00:03:25.328 Libraries Enabled
00:03:25.328 =================
00:03:25.328
00:03:25.328 libs:
00:03:25.328 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:25.328 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:25.328 cryptodev, dmadev, power, reorder, security, vhost,
00:03:25.328
00:03:25.328 Message:
00:03:25.328 ===============
00:03:25.328 Drivers Enabled
00:03:25.328 ===============
00:03:25.328
00:03:25.328 common:
00:03:25.328
00:03:25.328 bus:
00:03:25.328 pci, vdev,
00:03:25.328 mempool:
00:03:25.328 ring,
00:03:25.328 dma:
00:03:25.328
00:03:25.328 net:
00:03:25.328
00:03:25.328 crypto:
00:03:25.328
00:03:25.328 compress:
00:03:25.328
00:03:25.328 vdpa:
00:03:25.328
00:03:25.328
00:03:25.328 Message:
00:03:25.328 =================
00:03:25.328 Content Skipped
00:03:25.328 =================
00:03:25.328
00:03:25.328 apps:
00:03:25.328 dumpcap: explicitly disabled via build config
00:03:25.328 graph: explicitly disabled via build config
00:03:25.328 pdump: explicitly disabled via build config
00:03:25.328 proc-info: explicitly disabled via build config
00:03:25.328 test-acl: explicitly disabled via build config
00:03:25.328 test-bbdev: explicitly disabled via build config
00:03:25.328 test-cmdline: explicitly disabled via build config
00:03:25.328 test-compress-perf: explicitly disabled via build config
00:03:25.328 test-crypto-perf: explicitly disabled via build config
00:03:25.328 test-dma-perf: explicitly disabled via build config
00:03:25.328 test-eventdev: explicitly disabled via build config
00:03:25.328 test-fib: explicitly disabled via build config
00:03:25.328 test-flow-perf: explicitly disabled via build config
00:03:25.328 test-gpudev: explicitly disabled via build config
00:03:25.328 test-mldev: explicitly disabled via build config
00:03:25.328 test-pipeline: explicitly disabled via build config
00:03:25.328 test-pmd: explicitly disabled via build config
00:03:25.328 test-regex: explicitly disabled via build config
00:03:25.328 test-sad: explicitly disabled via build config
00:03:25.328 test-security-perf: explicitly disabled via build config
00:03:25.328 00:03:25.328 libs: 00:03:25.328 argparse: explicitly disabled via build config 00:03:25.328 metrics: explicitly disabled via build config 00:03:25.328 acl: explicitly disabled via build config 00:03:25.328 bbdev: explicitly disabled via build config 00:03:25.328 bitratestats: explicitly disabled via build config 00:03:25.328 bpf: explicitly disabled via build config 00:03:25.328 cfgfile: explicitly disabled via build config 00:03:25.328 distributor: explicitly disabled via build config 00:03:25.328 efd: explicitly disabled via build config 00:03:25.328 eventdev: explicitly disabled via build config 00:03:25.328 dispatcher: explicitly disabled via build config 00:03:25.328 gpudev: explicitly disabled via build config 00:03:25.328 gro: explicitly disabled via build config 00:03:25.328 gso: explicitly disabled via build config 00:03:25.328 ip_frag: explicitly disabled via build config 00:03:25.328 jobstats: explicitly disabled via build config 00:03:25.328 latencystats: explicitly disabled via build config 00:03:25.328 lpm: explicitly disabled via build config 00:03:25.328 member: explicitly disabled via build config 00:03:25.328 pcapng: explicitly disabled via build config 00:03:25.329 rawdev: explicitly disabled via build config 00:03:25.329 regexdev: explicitly disabled via build config 00:03:25.329 mldev: explicitly disabled via build config 00:03:25.329 rib: explicitly disabled via build config 00:03:25.329 sched: explicitly disabled via build config 00:03:25.329 stack: explicitly disabled via build config 00:03:25.329 ipsec: explicitly disabled via build config 00:03:25.329 pdcp: explicitly disabled via build config 00:03:25.329 fib: explicitly disabled via build config 00:03:25.329 port: explicitly disabled via build config 00:03:25.329 pdump: explicitly disabled via build config 00:03:25.329 table: explicitly disabled via build config 00:03:25.329 pipeline: explicitly disabled via build config 00:03:25.329 graph: explicitly disabled via build config 00:03:25.329 node: explicitly disabled via build config 00:03:25.329 00:03:25.329 drivers: 00:03:25.329 common/cpt: not in enabled drivers build config 00:03:25.329 common/dpaax: not in enabled drivers build config 00:03:25.329 common/iavf: not in enabled drivers build config 00:03:25.329 common/idpf: not in enabled drivers build config 00:03:25.329 common/ionic: not in enabled drivers build config 00:03:25.329 common/mvep: not in enabled drivers build config 00:03:25.329 common/octeontx: not in enabled drivers build config 00:03:25.329 bus/auxiliary: not in enabled drivers build config 00:03:25.329 bus/cdx: not in enabled drivers build config 00:03:25.329 bus/dpaa: not in enabled drivers build config 00:03:25.329 bus/fslmc: not in enabled drivers build config 00:03:25.329 bus/ifpga: not in enabled drivers build config 00:03:25.329 bus/platform: not in enabled drivers build config 00:03:25.329 bus/uacce: not in enabled drivers build config 00:03:25.329 bus/vmbus: not in enabled drivers build config 00:03:25.329 common/cnxk: not in enabled drivers build config 00:03:25.329 common/mlx5: not in enabled drivers build config 00:03:25.329 common/nfp: not in enabled drivers build config 00:03:25.329 common/nitrox: not in enabled drivers build config 00:03:25.329 common/qat: not in enabled drivers build config 00:03:25.329 common/sfc_efx: not in enabled drivers build config 00:03:25.329 mempool/bucket: not in enabled drivers build config 00:03:25.329 mempool/cnxk: not in enabled drivers build config 00:03:25.329 mempool/dpaa: not in 
enabled drivers build config 00:03:25.329 mempool/dpaa2: not in enabled drivers build config 00:03:25.329 mempool/octeontx: not in enabled drivers build config 00:03:25.329 mempool/stack: not in enabled drivers build config 00:03:25.329 dma/cnxk: not in enabled drivers build config 00:03:25.329 dma/dpaa: not in enabled drivers build config 00:03:25.329 dma/dpaa2: not in enabled drivers build config 00:03:25.329 dma/hisilicon: not in enabled drivers build config 00:03:25.329 dma/idxd: not in enabled drivers build config 00:03:25.329 dma/ioat: not in enabled drivers build config 00:03:25.329 dma/skeleton: not in enabled drivers build config 00:03:25.329 net/af_packet: not in enabled drivers build config 00:03:25.329 net/af_xdp: not in enabled drivers build config 00:03:25.329 net/ark: not in enabled drivers build config 00:03:25.329 net/atlantic: not in enabled drivers build config 00:03:25.329 net/avp: not in enabled drivers build config 00:03:25.329 net/axgbe: not in enabled drivers build config 00:03:25.329 net/bnx2x: not in enabled drivers build config 00:03:25.329 net/bnxt: not in enabled drivers build config 00:03:25.329 net/bonding: not in enabled drivers build config 00:03:25.329 net/cnxk: not in enabled drivers build config 00:03:25.329 net/cpfl: not in enabled drivers build config 00:03:25.329 net/cxgbe: not in enabled drivers build config 00:03:25.329 net/dpaa: not in enabled drivers build config 00:03:25.329 net/dpaa2: not in enabled drivers build config 00:03:25.329 net/e1000: not in enabled drivers build config 00:03:25.329 net/ena: not in enabled drivers build config 00:03:25.329 net/enetc: not in enabled drivers build config 00:03:25.329 net/enetfec: not in enabled drivers build config 00:03:25.329 net/enic: not in enabled drivers build config 00:03:25.329 net/failsafe: not in enabled drivers build config 00:03:25.329 net/fm10k: not in enabled drivers build config 00:03:25.329 net/gve: not in enabled drivers build config 00:03:25.329 net/hinic: not in enabled drivers build config 00:03:25.329 net/hns3: not in enabled drivers build config 00:03:25.329 net/i40e: not in enabled drivers build config 00:03:25.329 net/iavf: not in enabled drivers build config 00:03:25.329 net/ice: not in enabled drivers build config 00:03:25.329 net/idpf: not in enabled drivers build config 00:03:25.329 net/igc: not in enabled drivers build config 00:03:25.329 net/ionic: not in enabled drivers build config 00:03:25.329 net/ipn3ke: not in enabled drivers build config 00:03:25.329 net/ixgbe: not in enabled drivers build config 00:03:25.329 net/mana: not in enabled drivers build config 00:03:25.329 net/memif: not in enabled drivers build config 00:03:25.329 net/mlx4: not in enabled drivers build config 00:03:25.329 net/mlx5: not in enabled drivers build config 00:03:25.329 net/mvneta: not in enabled drivers build config 00:03:25.329 net/mvpp2: not in enabled drivers build config 00:03:25.329 net/netvsc: not in enabled drivers build config 00:03:25.329 net/nfb: not in enabled drivers build config 00:03:25.329 net/nfp: not in enabled drivers build config 00:03:25.329 net/ngbe: not in enabled drivers build config 00:03:25.329 net/null: not in enabled drivers build config 00:03:25.329 net/octeontx: not in enabled drivers build config 00:03:25.329 net/octeon_ep: not in enabled drivers build config 00:03:25.329 net/pcap: not in enabled drivers build config 00:03:25.329 net/pfe: not in enabled drivers build config 00:03:25.329 net/qede: not in enabled drivers build config 00:03:25.329 net/ring: not in 
enabled drivers build config 00:03:25.329 net/sfc: not in enabled drivers build config 00:03:25.329 net/softnic: not in enabled drivers build config 00:03:25.329 net/tap: not in enabled drivers build config 00:03:25.329 net/thunderx: not in enabled drivers build config 00:03:25.329 net/txgbe: not in enabled drivers build config 00:03:25.329 net/vdev_netvsc: not in enabled drivers build config 00:03:25.329 net/vhost: not in enabled drivers build config 00:03:25.329 net/virtio: not in enabled drivers build config 00:03:25.329 net/vmxnet3: not in enabled drivers build config 00:03:25.329 raw/*: missing internal dependency, "rawdev" 00:03:25.329 crypto/armv8: not in enabled drivers build config 00:03:25.329 crypto/bcmfs: not in enabled drivers build config 00:03:25.329 crypto/caam_jr: not in enabled drivers build config 00:03:25.329 crypto/ccp: not in enabled drivers build config 00:03:25.329 crypto/cnxk: not in enabled drivers build config 00:03:25.329 crypto/dpaa_sec: not in enabled drivers build config 00:03:25.329 crypto/dpaa2_sec: not in enabled drivers build config 00:03:25.329 crypto/ipsec_mb: not in enabled drivers build config 00:03:25.329 crypto/mlx5: not in enabled drivers build config 00:03:25.329 crypto/mvsam: not in enabled drivers build config 00:03:25.329 crypto/nitrox: not in enabled drivers build config 00:03:25.329 crypto/null: not in enabled drivers build config 00:03:25.329 crypto/octeontx: not in enabled drivers build config 00:03:25.329 crypto/openssl: not in enabled drivers build config 00:03:25.329 crypto/scheduler: not in enabled drivers build config 00:03:25.329 crypto/uadk: not in enabled drivers build config 00:03:25.329 crypto/virtio: not in enabled drivers build config 00:03:25.329 compress/isal: not in enabled drivers build config 00:03:25.329 compress/mlx5: not in enabled drivers build config 00:03:25.329 compress/nitrox: not in enabled drivers build config 00:03:25.329 compress/octeontx: not in enabled drivers build config 00:03:25.329 compress/zlib: not in enabled drivers build config 00:03:25.329 regex/*: missing internal dependency, "regexdev" 00:03:25.329 ml/*: missing internal dependency, "mldev" 00:03:25.329 vdpa/ifc: not in enabled drivers build config 00:03:25.329 vdpa/mlx5: not in enabled drivers build config 00:03:25.329 vdpa/nfp: not in enabled drivers build config 00:03:25.329 vdpa/sfc: not in enabled drivers build config 00:03:25.329 event/*: missing internal dependency, "eventdev" 00:03:25.329 baseband/*: missing internal dependency, "bbdev" 00:03:25.329 gpu/*: missing internal dependency, "gpudev" 00:03:25.329 00:03:25.329 00:03:25.590 Build targets in project: 84 00:03:25.590 00:03:25.590 DPDK 24.03.0 00:03:25.590 00:03:25.590 User defined options 00:03:25.590 buildtype : debug 00:03:25.590 default_library : shared 00:03:25.590 libdir : lib 00:03:25.590 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:03:25.590 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:25.590 c_link_args : 00:03:25.590 cpu_instruction_set: native 00:03:25.590 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:03:25.590 disable_libs : 
sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:03:25.590 enable_docs : false 00:03:25.590 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:25.590 enable_kmods : false 00:03:25.590 tests : false 00:03:25.590 00:03:25.590 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:25.856 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:03:25.856 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:25.856 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:25.856 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:25.856 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:25.856 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:25.856 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:26.116 [7/267] Linking static target lib/librte_kvargs.a 00:03:26.116 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:26.116 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:26.116 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:26.116 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:26.116 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:26.116 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:26.116 [14/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:26.116 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:26.116 [16/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:26.116 [17/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:26.116 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:26.116 [19/267] Linking static target lib/librte_log.a 00:03:26.116 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:26.116 [21/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:26.116 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:26.116 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:26.116 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:26.116 [25/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:26.116 [26/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:26.116 [27/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:26.116 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:26.116 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:26.117 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:26.117 [31/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:26.117 [32/267] Linking static target lib/librte_pci.a 00:03:26.117 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:26.376 [34/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:26.376 [35/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:26.376 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:26.376 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:26.376 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:26.376 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:26.376 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:26.376 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:26.376 [42/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.376 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:26.376 [44/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.376 [45/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:26.376 [46/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:26.376 [47/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:26.376 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:26.376 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:26.376 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:26.376 [51/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:26.376 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:26.376 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:26.376 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:26.637 [55/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:26.637 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:26.637 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:26.637 [58/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:26.637 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:26.637 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:26.637 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:26.637 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:26.637 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:26.637 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:26.637 [65/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:26.637 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:26.637 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:26.637 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:26.637 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:26.637 [70/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:26.637 [71/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:26.637 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:26.637 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:26.637 [74/267] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:26.637 [75/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:26.637 [76/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:26.637 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:26.637 [78/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:26.637 [79/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:26.637 [80/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:26.637 [81/267] Linking static target lib/librte_meter.a 00:03:26.637 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:26.637 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:26.637 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:26.637 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:26.637 [86/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:26.637 [87/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:26.637 [88/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:26.637 [89/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:26.637 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:26.637 [91/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:26.637 [92/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:26.637 [93/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:26.637 [94/267] Linking static target lib/librte_telemetry.a 00:03:26.637 [95/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:26.637 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:26.637 [97/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:26.637 [98/267] Linking static target lib/librte_ring.a 00:03:26.637 [99/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:26.637 [100/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:26.637 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:26.637 [102/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:26.637 [103/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:26.637 [104/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:26.637 [105/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:26.637 [106/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:26.637 [107/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:26.637 [108/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:26.637 [109/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:26.637 [110/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:26.637 [111/267] Linking static target lib/librte_reorder.a 00:03:26.637 [112/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:26.637 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:26.637 [114/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:26.637 [115/267] 
Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:26.637 [116/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:26.637 [117/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:26.637 [118/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:26.637 [119/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:26.637 [120/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:26.637 [121/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:26.637 [122/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:26.637 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:26.637 [124/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:26.637 [125/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:26.637 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:26.637 [127/267] Linking static target lib/librte_timer.a 00:03:26.637 [128/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:26.637 [129/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:26.637 [130/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:26.637 [131/267] Linking static target lib/librte_cmdline.a 00:03:26.637 [132/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.637 [133/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:26.637 [134/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:26.637 [135/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:26.637 [136/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:26.637 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:26.637 [138/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:26.637 [139/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:26.637 [140/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:26.637 [141/267] Linking static target lib/librte_dmadev.a 00:03:26.637 [142/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:26.637 [143/267] Linking static target lib/librte_mempool.a 00:03:26.637 [144/267] Linking target lib/librte_log.so.24.1 00:03:26.637 [145/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:26.637 [146/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:26.637 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:26.638 [148/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:26.638 [149/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:26.638 [150/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:26.638 [151/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:26.638 [152/267] Linking static target lib/librte_net.a 00:03:26.638 [153/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:26.638 [154/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:26.638 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:26.638 
[156/267] Linking static target lib/librte_power.a 00:03:26.638 [157/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:26.638 [158/267] Linking static target lib/librte_compressdev.a 00:03:26.638 [159/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:26.638 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:26.638 [161/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:26.638 [162/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:26.638 [163/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:26.638 [164/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:26.638 [165/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:26.638 [166/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:26.638 [167/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:26.898 [168/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:26.898 [169/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:26.898 [170/267] Linking static target lib/librte_rcu.a 00:03:26.898 [171/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:26.898 [172/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:26.898 [173/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:26.898 [174/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:26.898 [175/267] Linking static target lib/librte_security.a 00:03:26.898 [176/267] Linking static target lib/librte_eal.a 00:03:26.898 [177/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:26.898 [178/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.898 [179/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:26.898 [180/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:26.898 [181/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:26.898 [182/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:26.898 [183/267] Linking static target lib/librte_cryptodev.a 00:03:26.898 [184/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:26.898 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:26.898 [186/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:26.898 [187/267] Linking static target drivers/librte_bus_vdev.a 00:03:26.898 [188/267] Linking target lib/librte_kvargs.so.24.1 00:03:26.898 [189/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:26.898 [190/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:26.898 [191/267] Linking static target drivers/librte_mempool_ring.a 00:03:26.898 [192/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:26.898 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:26.898 [194/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:26.898 [195/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:26.898 [196/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 
00:03:26.899 [197/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.899 [198/267] Linking static target lib/librte_mbuf.a 00:03:26.899 [199/267] Linking static target drivers/librte_bus_pci.a 00:03:26.899 [200/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:26.899 [201/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:26.899 [202/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:26.899 [203/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:27.183 [204/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:27.183 [205/267] Linking static target lib/librte_hash.a 00:03:27.183 [206/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.183 [207/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.183 [208/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:27.183 [209/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.183 [210/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.183 [211/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.183 [212/267] Linking target lib/librte_telemetry.so.24.1 00:03:27.183 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.500 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:27.500 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.500 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.500 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:27.500 [218/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.761 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:27.761 [220/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.761 [221/267] Linking static target lib/librte_ethdev.a 00:03:27.761 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.761 [223/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.761 [224/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.023 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.023 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.024 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:28.024 [228/267] Linking static target lib/librte_vhost.a 00:03:28.968 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.354 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.946 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.333 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 
00:03:38.333 [233/267] Linking target lib/librte_eal.so.24.1 00:03:38.333 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:38.333 [235/267] Linking target lib/librte_ring.so.24.1 00:03:38.333 [236/267] Linking target lib/librte_meter.so.24.1 00:03:38.333 [237/267] Linking target lib/librte_pci.so.24.1 00:03:38.333 [238/267] Linking target lib/librte_timer.so.24.1 00:03:38.333 [239/267] Linking target lib/librte_dmadev.so.24.1 00:03:38.333 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:38.594 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:38.595 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:38.595 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:38.595 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:38.595 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:38.595 [246/267] Linking target lib/librte_rcu.so.24.1 00:03:38.595 [247/267] Linking target lib/librte_mempool.so.24.1 00:03:38.595 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:38.595 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:38.595 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:38.855 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:38.855 [252/267] Linking target lib/librte_mbuf.so.24.1 00:03:38.855 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:38.855 [254/267] Linking target lib/librte_net.so.24.1 00:03:38.855 [255/267] Linking target lib/librte_compressdev.so.24.1 00:03:38.855 [256/267] Linking target lib/librte_reorder.so.24.1 00:03:38.855 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:03:39.116 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:39.116 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:39.116 [260/267] Linking target lib/librte_hash.so.24.1 00:03:39.116 [261/267] Linking target lib/librte_cmdline.so.24.1 00:03:39.116 [262/267] Linking target lib/librte_security.so.24.1 00:03:39.116 [263/267] Linking target lib/librte_ethdev.so.24.1 00:03:39.116 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:39.378 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:39.378 [266/267] Linking target lib/librte_power.so.24.1 00:03:39.378 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:39.378 INFO: autodetecting backend as ninja 00:03:39.378 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 144 00:03:40.333 CC lib/ut/ut.o 00:03:40.333 CC lib/ut_mock/mock.o 00:03:40.333 CC lib/log/log.o 00:03:40.333 CC lib/log/log_flags.o 00:03:40.333 CC lib/log/log_deprecated.o 00:03:40.595 LIB libspdk_ut.a 00:03:40.595 LIB libspdk_log.a 00:03:40.595 LIB libspdk_ut_mock.a 00:03:40.595 SO libspdk_ut.so.2.0 00:03:40.595 SO libspdk_log.so.7.0 00:03:40.595 SO libspdk_ut_mock.so.6.0 00:03:40.595 SYMLINK libspdk_ut.so 00:03:40.595 SYMLINK libspdk_log.so 00:03:40.595 SYMLINK libspdk_ut_mock.so 00:03:40.856 CC lib/util/base64.o 00:03:40.856 CC lib/util/bit_array.o 00:03:41.117 CC lib/dma/dma.o 00:03:41.117 CC 
lib/util/cpuset.o 00:03:41.117 CC lib/util/crc16.o 00:03:41.117 CC lib/util/crc32.o 00:03:41.117 CC lib/util/crc32c.o 00:03:41.117 CC lib/util/crc32_ieee.o 00:03:41.117 CC lib/ioat/ioat.o 00:03:41.117 CXX lib/trace_parser/trace.o 00:03:41.117 CC lib/util/crc64.o 00:03:41.117 CC lib/util/dif.o 00:03:41.117 CC lib/util/fd.o 00:03:41.117 CC lib/util/file.o 00:03:41.117 CC lib/util/hexlify.o 00:03:41.117 CC lib/util/iov.o 00:03:41.117 CC lib/util/math.o 00:03:41.117 CC lib/util/pipe.o 00:03:41.117 CC lib/util/strerror_tls.o 00:03:41.117 CC lib/util/string.o 00:03:41.117 CC lib/util/uuid.o 00:03:41.117 CC lib/util/xor.o 00:03:41.117 CC lib/util/fd_group.o 00:03:41.117 CC lib/util/zipf.o 00:03:41.117 CC lib/vfio_user/host/vfio_user_pci.o 00:03:41.117 CC lib/vfio_user/host/vfio_user.o 00:03:41.117 LIB libspdk_dma.a 00:03:41.117 SO libspdk_dma.so.4.0 00:03:41.378 LIB libspdk_ioat.a 00:03:41.378 SYMLINK libspdk_dma.so 00:03:41.378 SO libspdk_ioat.so.7.0 00:03:41.378 SYMLINK libspdk_ioat.so 00:03:41.378 LIB libspdk_vfio_user.a 00:03:41.378 SO libspdk_vfio_user.so.5.0 00:03:41.378 LIB libspdk_util.a 00:03:41.378 SYMLINK libspdk_vfio_user.so 00:03:41.639 SO libspdk_util.so.9.0 00:03:41.639 SYMLINK libspdk_util.so 00:03:41.639 LIB libspdk_trace_parser.a 00:03:41.901 SO libspdk_trace_parser.so.5.0 00:03:41.901 SYMLINK libspdk_trace_parser.so 00:03:41.901 CC lib/env_dpdk/env.o 00:03:41.901 CC lib/env_dpdk/memory.o 00:03:41.901 CC lib/env_dpdk/threads.o 00:03:41.901 CC lib/env_dpdk/pci.o 00:03:41.901 CC lib/env_dpdk/init.o 00:03:41.901 CC lib/env_dpdk/pci_ioat.o 00:03:41.901 CC lib/env_dpdk/pci_virtio.o 00:03:41.901 CC lib/env_dpdk/pci_vmd.o 00:03:41.901 CC lib/env_dpdk/pci_idxd.o 00:03:41.901 CC lib/env_dpdk/pci_event.o 00:03:41.901 CC lib/env_dpdk/sigbus_handler.o 00:03:41.901 CC lib/env_dpdk/pci_dpdk.o 00:03:41.901 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:41.901 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:41.901 CC lib/json/json_parse.o 00:03:41.901 CC lib/json/json_util.o 00:03:41.901 CC lib/json/json_write.o 00:03:41.901 CC lib/conf/conf.o 00:03:41.901 CC lib/rdma/common.o 00:03:41.901 CC lib/idxd/idxd.o 00:03:41.901 CC lib/vmd/vmd.o 00:03:41.901 CC lib/idxd/idxd_user.o 00:03:41.901 CC lib/rdma/rdma_verbs.o 00:03:41.901 CC lib/vmd/led.o 00:03:41.901 CC lib/idxd/idxd_kernel.o 00:03:42.162 LIB libspdk_conf.a 00:03:42.162 SO libspdk_conf.so.6.0 00:03:42.162 LIB libspdk_json.a 00:03:42.162 LIB libspdk_rdma.a 00:03:42.162 SO libspdk_json.so.6.0 00:03:42.162 SYMLINK libspdk_conf.so 00:03:42.423 SO libspdk_rdma.so.6.0 00:03:42.423 SYMLINK libspdk_json.so 00:03:42.423 SYMLINK libspdk_rdma.so 00:03:42.423 LIB libspdk_idxd.a 00:03:42.423 SO libspdk_idxd.so.12.0 00:03:42.423 LIB libspdk_vmd.a 00:03:42.684 SO libspdk_vmd.so.6.0 00:03:42.684 SYMLINK libspdk_idxd.so 00:03:42.684 SYMLINK libspdk_vmd.so 00:03:42.684 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:42.684 CC lib/jsonrpc/jsonrpc_server.o 00:03:42.684 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:42.684 CC lib/jsonrpc/jsonrpc_client.o 00:03:42.946 LIB libspdk_jsonrpc.a 00:03:42.946 SO libspdk_jsonrpc.so.6.0 00:03:43.207 SYMLINK libspdk_jsonrpc.so 00:03:43.207 LIB libspdk_env_dpdk.a 00:03:43.207 SO libspdk_env_dpdk.so.14.1 00:03:43.466 SYMLINK libspdk_env_dpdk.so 00:03:43.466 CC lib/rpc/rpc.o 00:03:43.726 LIB libspdk_rpc.a 00:03:43.726 SO libspdk_rpc.so.6.0 00:03:43.726 SYMLINK libspdk_rpc.so 00:03:43.987 CC lib/keyring/keyring.o 00:03:43.987 CC lib/keyring/keyring_rpc.o 00:03:43.987 CC lib/notify/notify.o 00:03:43.987 CC lib/notify/notify_rpc.o 00:03:43.987 CC 
lib/trace/trace.o 00:03:43.987 CC lib/trace/trace_flags.o 00:03:43.987 CC lib/trace/trace_rpc.o 00:03:44.248 LIB libspdk_notify.a 00:03:44.248 LIB libspdk_keyring.a 00:03:44.248 SO libspdk_notify.so.6.0 00:03:44.248 SO libspdk_keyring.so.1.0 00:03:44.248 LIB libspdk_trace.a 00:03:44.509 SYMLINK libspdk_notify.so 00:03:44.509 SO libspdk_trace.so.10.0 00:03:44.509 SYMLINK libspdk_keyring.so 00:03:44.509 SYMLINK libspdk_trace.so 00:03:44.770 CC lib/thread/thread.o 00:03:44.770 CC lib/thread/iobuf.o 00:03:44.770 CC lib/sock/sock.o 00:03:44.770 CC lib/sock/sock_rpc.o 00:03:45.035 LIB libspdk_sock.a 00:03:45.311 SO libspdk_sock.so.9.0 00:03:45.311 SYMLINK libspdk_sock.so 00:03:45.581 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:45.581 CC lib/nvme/nvme_ctrlr.o 00:03:45.581 CC lib/nvme/nvme_fabric.o 00:03:45.581 CC lib/nvme/nvme_ns_cmd.o 00:03:45.581 CC lib/nvme/nvme_ns.o 00:03:45.581 CC lib/nvme/nvme_pcie_common.o 00:03:45.581 CC lib/nvme/nvme_pcie.o 00:03:45.581 CC lib/nvme/nvme_qpair.o 00:03:45.581 CC lib/nvme/nvme.o 00:03:45.581 CC lib/nvme/nvme_quirks.o 00:03:45.581 CC lib/nvme/nvme_transport.o 00:03:45.581 CC lib/nvme/nvme_discovery.o 00:03:45.581 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:45.581 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:45.581 CC lib/nvme/nvme_tcp.o 00:03:45.581 CC lib/nvme/nvme_opal.o 00:03:45.581 CC lib/nvme/nvme_io_msg.o 00:03:45.581 CC lib/nvme/nvme_poll_group.o 00:03:45.581 CC lib/nvme/nvme_zns.o 00:03:45.581 CC lib/nvme/nvme_stubs.o 00:03:45.581 CC lib/nvme/nvme_cuse.o 00:03:45.581 CC lib/nvme/nvme_auth.o 00:03:45.581 CC lib/nvme/nvme_rdma.o 00:03:46.152 LIB libspdk_thread.a 00:03:46.152 SO libspdk_thread.so.10.0 00:03:46.152 SYMLINK libspdk_thread.so 00:03:46.413 CC lib/blob/blobstore.o 00:03:46.413 CC lib/blob/request.o 00:03:46.413 CC lib/blob/zeroes.o 00:03:46.413 CC lib/blob/blob_bs_dev.o 00:03:46.413 CC lib/virtio/virtio.o 00:03:46.413 CC lib/virtio/virtio_vhost_user.o 00:03:46.413 CC lib/virtio/virtio_vfio_user.o 00:03:46.413 CC lib/virtio/virtio_pci.o 00:03:46.413 CC lib/init/json_config.o 00:03:46.414 CC lib/init/subsystem.o 00:03:46.414 CC lib/init/subsystem_rpc.o 00:03:46.414 CC lib/init/rpc.o 00:03:46.414 CC lib/accel/accel.o 00:03:46.414 CC lib/accel/accel_rpc.o 00:03:46.414 CC lib/accel/accel_sw.o 00:03:46.675 LIB libspdk_init.a 00:03:46.675 SO libspdk_init.so.5.0 00:03:46.675 LIB libspdk_virtio.a 00:03:46.937 SO libspdk_virtio.so.7.0 00:03:46.937 SYMLINK libspdk_init.so 00:03:46.937 SYMLINK libspdk_virtio.so 00:03:47.198 CC lib/event/app.o 00:03:47.198 CC lib/event/reactor.o 00:03:47.199 CC lib/event/log_rpc.o 00:03:47.199 CC lib/event/app_rpc.o 00:03:47.199 CC lib/event/scheduler_static.o 00:03:47.460 LIB libspdk_accel.a 00:03:47.460 SO libspdk_accel.so.15.0 00:03:47.460 LIB libspdk_nvme.a 00:03:47.460 SYMLINK libspdk_accel.so 00:03:47.460 SO libspdk_nvme.so.13.1 00:03:47.460 LIB libspdk_event.a 00:03:47.722 SO libspdk_event.so.13.1 00:03:47.722 SYMLINK libspdk_event.so 00:03:47.722 SYMLINK libspdk_nvme.so 00:03:47.722 CC lib/bdev/bdev.o 00:03:47.722 CC lib/bdev/bdev_rpc.o 00:03:47.722 CC lib/bdev/bdev_zone.o 00:03:47.722 CC lib/bdev/scsi_nvme.o 00:03:47.722 CC lib/bdev/part.o 00:03:49.110 LIB libspdk_blob.a 00:03:49.110 SO libspdk_blob.so.11.0 00:03:49.110 SYMLINK libspdk_blob.so 00:03:49.371 CC lib/blobfs/blobfs.o 00:03:49.371 CC lib/blobfs/tree.o 00:03:49.371 CC lib/lvol/lvol.o 00:03:49.947 LIB libspdk_bdev.a 00:03:49.947 SO libspdk_bdev.so.15.0 00:03:50.208 LIB libspdk_blobfs.a 00:03:50.208 SYMLINK libspdk_bdev.so 00:03:50.208 SO libspdk_blobfs.so.10.0 
00:03:50.208 LIB libspdk_lvol.a 00:03:50.208 SYMLINK libspdk_blobfs.so 00:03:50.209 SO libspdk_lvol.so.10.0 00:03:50.209 SYMLINK libspdk_lvol.so 00:03:50.469 CC lib/nbd/nbd.o 00:03:50.469 CC lib/nbd/nbd_rpc.o 00:03:50.469 CC lib/nvmf/ctrlr.o 00:03:50.469 CC lib/ftl/ftl_core.o 00:03:50.469 CC lib/ublk/ublk.o 00:03:50.469 CC lib/nvmf/ctrlr_discovery.o 00:03:50.469 CC lib/ftl/ftl_init.o 00:03:50.469 CC lib/ublk/ublk_rpc.o 00:03:50.469 CC lib/nvmf/ctrlr_bdev.o 00:03:50.469 CC lib/scsi/dev.o 00:03:50.469 CC lib/ftl/ftl_layout.o 00:03:50.469 CC lib/scsi/lun.o 00:03:50.469 CC lib/ftl/ftl_debug.o 00:03:50.469 CC lib/nvmf/subsystem.o 00:03:50.469 CC lib/nvmf/nvmf.o 00:03:50.469 CC lib/scsi/port.o 00:03:50.469 CC lib/ftl/ftl_io.o 00:03:50.469 CC lib/nvmf/nvmf_rpc.o 00:03:50.469 CC lib/scsi/scsi.o 00:03:50.469 CC lib/ftl/ftl_sb.o 00:03:50.469 CC lib/nvmf/transport.o 00:03:50.469 CC lib/scsi/scsi_bdev.o 00:03:50.469 CC lib/ftl/ftl_l2p.o 00:03:50.469 CC lib/nvmf/tcp.o 00:03:50.469 CC lib/ftl/ftl_l2p_flat.o 00:03:50.469 CC lib/scsi/scsi_pr.o 00:03:50.469 CC lib/nvmf/stubs.o 00:03:50.469 CC lib/ftl/ftl_nv_cache.o 00:03:50.469 CC lib/scsi/scsi_rpc.o 00:03:50.469 CC lib/nvmf/mdns_server.o 00:03:50.469 CC lib/scsi/task.o 00:03:50.469 CC lib/ftl/ftl_band.o 00:03:50.469 CC lib/nvmf/rdma.o 00:03:50.469 CC lib/ftl/ftl_band_ops.o 00:03:50.469 CC lib/nvmf/auth.o 00:03:50.469 CC lib/ftl/ftl_writer.o 00:03:50.469 CC lib/ftl/ftl_rq.o 00:03:50.469 CC lib/ftl/ftl_reloc.o 00:03:50.469 CC lib/ftl/ftl_l2p_cache.o 00:03:50.469 CC lib/ftl/ftl_p2l.o 00:03:50.469 CC lib/ftl/mngt/ftl_mngt.o 00:03:50.469 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:50.469 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:50.469 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:50.469 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:50.469 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:50.469 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:50.469 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:50.469 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:50.469 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:50.469 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:50.469 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:50.469 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:50.469 CC lib/ftl/utils/ftl_conf.o 00:03:50.469 CC lib/ftl/utils/ftl_md.o 00:03:50.469 CC lib/ftl/utils/ftl_mempool.o 00:03:50.469 CC lib/ftl/utils/ftl_bitmap.o 00:03:50.469 CC lib/ftl/utils/ftl_property.o 00:03:50.469 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:50.469 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:50.469 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:50.469 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:50.469 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:50.469 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:50.469 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:50.469 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:50.470 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:50.470 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:50.470 CC lib/ftl/base/ftl_base_dev.o 00:03:50.470 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:50.470 CC lib/ftl/base/ftl_base_bdev.o 00:03:50.470 CC lib/ftl/ftl_trace.o 00:03:51.041 LIB libspdk_nbd.a 00:03:51.041 SO libspdk_nbd.so.7.0 00:03:51.041 LIB libspdk_scsi.a 00:03:51.041 SYMLINK libspdk_nbd.so 00:03:51.041 SO libspdk_scsi.so.9.0 00:03:51.302 LIB libspdk_ublk.a 00:03:51.302 SYMLINK libspdk_scsi.so 00:03:51.302 SO libspdk_ublk.so.3.0 00:03:51.302 SYMLINK libspdk_ublk.so 00:03:51.563 LIB libspdk_ftl.a 00:03:51.563 CC lib/vhost/vhost.o 00:03:51.563 CC lib/vhost/vhost_rpc.o 00:03:51.563 CC lib/vhost/vhost_scsi.o 00:03:51.563 CC lib/iscsi/conn.o 00:03:51.563 CC lib/vhost/vhost_blk.o 00:03:51.563 CC 
lib/iscsi/init_grp.o 00:03:51.563 CC lib/vhost/rte_vhost_user.o 00:03:51.563 CC lib/iscsi/iscsi.o 00:03:51.563 CC lib/iscsi/md5.o 00:03:51.563 CC lib/iscsi/param.o 00:03:51.563 CC lib/iscsi/portal_grp.o 00:03:51.563 CC lib/iscsi/tgt_node.o 00:03:51.563 CC lib/iscsi/iscsi_subsystem.o 00:03:51.563 CC lib/iscsi/iscsi_rpc.o 00:03:51.563 CC lib/iscsi/task.o 00:03:51.563 SO libspdk_ftl.so.9.0 00:03:51.825 SYMLINK libspdk_ftl.so 00:03:52.398 LIB libspdk_nvmf.a 00:03:52.398 SO libspdk_nvmf.so.18.1 00:03:52.398 LIB libspdk_vhost.a 00:03:52.398 SYMLINK libspdk_nvmf.so 00:03:52.398 SO libspdk_vhost.so.8.0 00:03:52.660 SYMLINK libspdk_vhost.so 00:03:52.660 LIB libspdk_iscsi.a 00:03:52.660 SO libspdk_iscsi.so.8.0 00:03:52.921 SYMLINK libspdk_iscsi.so 00:03:53.495 CC module/env_dpdk/env_dpdk_rpc.o 00:03:53.495 CC module/keyring/linux/keyring_rpc.o 00:03:53.495 CC module/keyring/linux/keyring.o 00:03:53.495 LIB libspdk_env_dpdk_rpc.a 00:03:53.495 CC module/keyring/file/keyring.o 00:03:53.495 CC module/keyring/file/keyring_rpc.o 00:03:53.495 CC module/blob/bdev/blob_bdev.o 00:03:53.495 CC module/accel/dsa/accel_dsa.o 00:03:53.495 CC module/accel/dsa/accel_dsa_rpc.o 00:03:53.495 CC module/sock/posix/posix.o 00:03:53.495 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:53.495 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:53.757 CC module/accel/ioat/accel_ioat.o 00:03:53.757 CC module/accel/error/accel_error.o 00:03:53.757 CC module/accel/ioat/accel_ioat_rpc.o 00:03:53.757 CC module/scheduler/gscheduler/gscheduler.o 00:03:53.757 CC module/accel/iaa/accel_iaa.o 00:03:53.757 CC module/accel/error/accel_error_rpc.o 00:03:53.757 CC module/accel/iaa/accel_iaa_rpc.o 00:03:53.757 SO libspdk_env_dpdk_rpc.so.6.0 00:03:53.757 SYMLINK libspdk_env_dpdk_rpc.so 00:03:53.757 LIB libspdk_keyring_linux.a 00:03:53.757 LIB libspdk_keyring_file.a 00:03:53.757 LIB libspdk_scheduler_dpdk_governor.a 00:03:53.757 LIB libspdk_scheduler_gscheduler.a 00:03:53.757 SO libspdk_keyring_file.so.1.0 00:03:53.757 SO libspdk_keyring_linux.so.1.0 00:03:53.757 LIB libspdk_accel_dsa.a 00:03:53.757 LIB libspdk_accel_ioat.a 00:03:53.757 LIB libspdk_scheduler_dynamic.a 00:03:53.757 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:53.757 SO libspdk_scheduler_gscheduler.so.4.0 00:03:53.757 LIB libspdk_accel_error.a 00:03:53.757 LIB libspdk_accel_iaa.a 00:03:53.757 SO libspdk_scheduler_dynamic.so.4.0 00:03:53.757 SO libspdk_accel_dsa.so.5.0 00:03:53.757 SO libspdk_accel_error.so.2.0 00:03:53.757 SO libspdk_accel_ioat.so.6.0 00:03:53.757 SYMLINK libspdk_keyring_file.so 00:03:54.017 LIB libspdk_blob_bdev.a 00:03:54.017 SO libspdk_accel_iaa.so.3.0 00:03:54.017 SYMLINK libspdk_keyring_linux.so 00:03:54.017 SYMLINK libspdk_scheduler_gscheduler.so 00:03:54.017 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:54.017 SO libspdk_blob_bdev.so.11.0 00:03:54.017 SYMLINK libspdk_scheduler_dynamic.so 00:03:54.017 SYMLINK libspdk_accel_ioat.so 00:03:54.017 SYMLINK libspdk_accel_error.so 00:03:54.017 SYMLINK libspdk_accel_dsa.so 00:03:54.017 SYMLINK libspdk_accel_iaa.so 00:03:54.017 SYMLINK libspdk_blob_bdev.so 00:03:54.278 LIB libspdk_sock_posix.a 00:03:54.278 SO libspdk_sock_posix.so.6.0 00:03:54.539 SYMLINK libspdk_sock_posix.so 00:03:54.539 CC module/bdev/aio/bdev_aio.o 00:03:54.539 CC module/bdev/nvme/bdev_nvme.o 00:03:54.539 CC module/bdev/aio/bdev_aio_rpc.o 00:03:54.539 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:54.539 CC module/bdev/iscsi/bdev_iscsi.o 00:03:54.539 CC module/bdev/gpt/gpt.o 00:03:54.539 CC module/bdev/iscsi/bdev_iscsi_rpc.o 
00:03:54.539 CC module/bdev/nvme/bdev_mdns_client.o 00:03:54.539 CC module/bdev/nvme/nvme_rpc.o 00:03:54.539 CC module/bdev/null/bdev_null.o 00:03:54.539 CC module/bdev/delay/vbdev_delay.o 00:03:54.539 CC module/bdev/nvme/vbdev_opal.o 00:03:54.539 CC module/bdev/gpt/vbdev_gpt.o 00:03:54.539 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:54.539 CC module/bdev/null/bdev_null_rpc.o 00:03:54.539 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:54.539 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:54.539 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:54.539 CC module/bdev/malloc/bdev_malloc.o 00:03:54.539 CC module/blobfs/bdev/blobfs_bdev.o 00:03:54.539 CC module/bdev/error/vbdev_error.o 00:03:54.539 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:54.539 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:54.539 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:54.539 CC module/bdev/error/vbdev_error_rpc.o 00:03:54.539 CC module/bdev/split/vbdev_split.o 00:03:54.539 CC module/bdev/raid/bdev_raid.o 00:03:54.539 CC module/bdev/split/vbdev_split_rpc.o 00:03:54.539 CC module/bdev/ftl/bdev_ftl.o 00:03:54.539 CC module/bdev/raid/bdev_raid_rpc.o 00:03:54.539 CC module/bdev/passthru/vbdev_passthru.o 00:03:54.539 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:54.539 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:54.539 CC module/bdev/lvol/vbdev_lvol.o 00:03:54.539 CC module/bdev/raid/bdev_raid_sb.o 00:03:54.539 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:54.539 CC module/bdev/raid/raid0.o 00:03:54.539 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:54.539 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:54.539 CC module/bdev/raid/raid1.o 00:03:54.539 CC module/bdev/raid/concat.o 00:03:54.539 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:54.798 LIB libspdk_blobfs_bdev.a 00:03:54.798 LIB libspdk_bdev_null.a 00:03:54.798 SO libspdk_blobfs_bdev.so.6.0 00:03:54.798 SO libspdk_bdev_null.so.6.0 00:03:54.798 LIB libspdk_bdev_error.a 00:03:54.798 LIB libspdk_bdev_split.a 00:03:54.798 LIB libspdk_bdev_malloc.a 00:03:54.798 LIB libspdk_bdev_gpt.a 00:03:54.798 SO libspdk_bdev_error.so.6.0 00:03:54.798 LIB libspdk_bdev_ftl.a 00:03:54.798 SYMLINK libspdk_blobfs_bdev.so 00:03:54.798 SO libspdk_bdev_malloc.so.6.0 00:03:54.798 SO libspdk_bdev_split.so.6.0 00:03:54.798 LIB libspdk_bdev_aio.a 00:03:54.798 LIB libspdk_bdev_passthru.a 00:03:54.798 SO libspdk_bdev_gpt.so.6.0 00:03:54.798 LIB libspdk_bdev_zone_block.a 00:03:54.798 LIB libspdk_bdev_delay.a 00:03:54.798 SYMLINK libspdk_bdev_error.so 00:03:54.798 SYMLINK libspdk_bdev_null.so 00:03:54.798 SO libspdk_bdev_ftl.so.6.0 00:03:54.798 LIB libspdk_bdev_iscsi.a 00:03:54.798 SO libspdk_bdev_aio.so.6.0 00:03:54.798 SO libspdk_bdev_zone_block.so.6.0 00:03:54.798 SO libspdk_bdev_passthru.so.6.0 00:03:54.798 SO libspdk_bdev_delay.so.6.0 00:03:55.059 SYMLINK libspdk_bdev_gpt.so 00:03:55.059 SYMLINK libspdk_bdev_malloc.so 00:03:55.059 SYMLINK libspdk_bdev_split.so 00:03:55.059 SO libspdk_bdev_iscsi.so.6.0 00:03:55.059 SYMLINK libspdk_bdev_ftl.so 00:03:55.059 SYMLINK libspdk_bdev_aio.so 00:03:55.059 SYMLINK libspdk_bdev_passthru.so 00:03:55.059 SYMLINK libspdk_bdev_delay.so 00:03:55.059 SYMLINK libspdk_bdev_zone_block.so 00:03:55.059 SYMLINK libspdk_bdev_iscsi.so 00:03:55.059 LIB libspdk_bdev_lvol.a 00:03:55.059 LIB libspdk_bdev_virtio.a 00:03:55.059 SO libspdk_bdev_lvol.so.6.0 00:03:55.059 SO libspdk_bdev_virtio.so.6.0 00:03:55.059 SYMLINK libspdk_bdev_lvol.so 00:03:55.319 SYMLINK libspdk_bdev_virtio.so 00:03:55.319 LIB libspdk_bdev_raid.a 00:03:55.579 SO libspdk_bdev_raid.so.6.0 00:03:55.579 
SYMLINK libspdk_bdev_raid.so 00:03:56.519 LIB libspdk_bdev_nvme.a 00:03:56.519 SO libspdk_bdev_nvme.so.7.0 00:03:56.519 SYMLINK libspdk_bdev_nvme.so 00:03:57.463 CC module/event/subsystems/keyring/keyring.o 00:03:57.463 CC module/event/subsystems/iobuf/iobuf.o 00:03:57.463 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:57.463 CC module/event/subsystems/vmd/vmd.o 00:03:57.463 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:57.463 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:57.463 CC module/event/subsystems/sock/sock.o 00:03:57.463 CC module/event/subsystems/scheduler/scheduler.o 00:03:57.463 LIB libspdk_event_keyring.a 00:03:57.463 LIB libspdk_event_vhost_blk.a 00:03:57.463 LIB libspdk_event_vmd.a 00:03:57.463 LIB libspdk_event_sock.a 00:03:57.463 LIB libspdk_event_scheduler.a 00:03:57.463 LIB libspdk_event_iobuf.a 00:03:57.463 SO libspdk_event_keyring.so.1.0 00:03:57.463 SO libspdk_event_vhost_blk.so.3.0 00:03:57.463 SO libspdk_event_vmd.so.6.0 00:03:57.463 SO libspdk_event_sock.so.5.0 00:03:57.463 SO libspdk_event_scheduler.so.4.0 00:03:57.463 SO libspdk_event_iobuf.so.3.0 00:03:57.463 SYMLINK libspdk_event_keyring.so 00:03:57.463 SYMLINK libspdk_event_vhost_blk.so 00:03:57.463 SYMLINK libspdk_event_vmd.so 00:03:57.463 SYMLINK libspdk_event_sock.so 00:03:57.463 SYMLINK libspdk_event_scheduler.so 00:03:57.463 SYMLINK libspdk_event_iobuf.so 00:03:58.034 CC module/event/subsystems/accel/accel.o 00:03:58.034 LIB libspdk_event_accel.a 00:03:58.034 SO libspdk_event_accel.so.6.0 00:03:58.034 SYMLINK libspdk_event_accel.so 00:03:58.605 CC module/event/subsystems/bdev/bdev.o 00:03:58.605 LIB libspdk_event_bdev.a 00:03:58.605 SO libspdk_event_bdev.so.6.0 00:03:58.866 SYMLINK libspdk_event_bdev.so 00:03:59.126 CC module/event/subsystems/scsi/scsi.o 00:03:59.126 CC module/event/subsystems/nbd/nbd.o 00:03:59.126 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:59.126 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:59.126 CC module/event/subsystems/ublk/ublk.o 00:03:59.126 LIB libspdk_event_scsi.a 00:03:59.126 LIB libspdk_event_ublk.a 00:03:59.126 LIB libspdk_event_nbd.a 00:03:59.387 SO libspdk_event_scsi.so.6.0 00:03:59.387 SO libspdk_event_ublk.so.3.0 00:03:59.387 SO libspdk_event_nbd.so.6.0 00:03:59.387 LIB libspdk_event_nvmf.a 00:03:59.387 SYMLINK libspdk_event_scsi.so 00:03:59.387 SYMLINK libspdk_event_ublk.so 00:03:59.387 SYMLINK libspdk_event_nbd.so 00:03:59.387 SO libspdk_event_nvmf.so.6.0 00:03:59.387 SYMLINK libspdk_event_nvmf.so 00:03:59.646 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:59.646 CC module/event/subsystems/iscsi/iscsi.o 00:03:59.907 LIB libspdk_event_vhost_scsi.a 00:03:59.907 LIB libspdk_event_iscsi.a 00:03:59.907 SO libspdk_event_vhost_scsi.so.3.0 00:03:59.907 SO libspdk_event_iscsi.so.6.0 00:03:59.907 SYMLINK libspdk_event_vhost_scsi.so 00:03:59.907 SYMLINK libspdk_event_iscsi.so 00:04:00.229 SO libspdk.so.6.0 00:04:00.229 SYMLINK libspdk.so 00:04:00.527 CXX app/trace/trace.o 00:04:00.527 CC app/spdk_lspci/spdk_lspci.o 00:04:00.527 TEST_HEADER include/spdk/accel.h 00:04:00.527 TEST_HEADER include/spdk/accel_module.h 00:04:00.527 TEST_HEADER include/spdk/barrier.h 00:04:00.527 TEST_HEADER include/spdk/assert.h 00:04:00.527 CC app/spdk_top/spdk_top.o 00:04:00.527 TEST_HEADER include/spdk/base64.h 00:04:00.527 TEST_HEADER include/spdk/bdev.h 00:04:00.527 CC app/trace_record/trace_record.o 00:04:00.527 CC app/spdk_nvme_discover/discovery_aer.o 00:04:00.527 CC app/spdk_nvme_identify/identify.o 00:04:00.527 TEST_HEADER include/spdk/bdev_module.h 
00:04:00.527 TEST_HEADER include/spdk/bit_array.h 00:04:00.527 TEST_HEADER include/spdk/bdev_zone.h 00:04:00.527 TEST_HEADER include/spdk/blob_bdev.h 00:04:00.527 CC test/rpc_client/rpc_client_test.o 00:04:00.527 TEST_HEADER include/spdk/bit_pool.h 00:04:00.527 CC app/spdk_nvme_perf/perf.o 00:04:00.527 TEST_HEADER include/spdk/blobfs.h 00:04:00.527 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:00.527 TEST_HEADER include/spdk/blob.h 00:04:00.527 TEST_HEADER include/spdk/conf.h 00:04:00.527 TEST_HEADER include/spdk/config.h 00:04:00.527 TEST_HEADER include/spdk/cpuset.h 00:04:00.527 TEST_HEADER include/spdk/crc32.h 00:04:00.527 TEST_HEADER include/spdk/crc64.h 00:04:00.527 TEST_HEADER include/spdk/crc16.h 00:04:00.527 CC app/nvmf_tgt/nvmf_main.o 00:04:00.527 TEST_HEADER include/spdk/dif.h 00:04:00.527 TEST_HEADER include/spdk/endian.h 00:04:00.528 TEST_HEADER include/spdk/env_dpdk.h 00:04:00.528 TEST_HEADER include/spdk/dma.h 00:04:00.528 TEST_HEADER include/spdk/env.h 00:04:00.528 TEST_HEADER include/spdk/event.h 00:04:00.528 TEST_HEADER include/spdk/fd.h 00:04:00.528 TEST_HEADER include/spdk/fd_group.h 00:04:00.528 TEST_HEADER include/spdk/file.h 00:04:00.528 TEST_HEADER include/spdk/ftl.h 00:04:00.528 TEST_HEADER include/spdk/hexlify.h 00:04:00.528 TEST_HEADER include/spdk/gpt_spec.h 00:04:00.528 TEST_HEADER include/spdk/idxd.h 00:04:00.528 TEST_HEADER include/spdk/histogram_data.h 00:04:00.528 TEST_HEADER include/spdk/idxd_spec.h 00:04:00.528 TEST_HEADER include/spdk/ioat.h 00:04:00.528 CC app/spdk_dd/spdk_dd.o 00:04:00.528 TEST_HEADER include/spdk/init.h 00:04:00.528 TEST_HEADER include/spdk/ioat_spec.h 00:04:00.528 TEST_HEADER include/spdk/iscsi_spec.h 00:04:00.528 TEST_HEADER include/spdk/json.h 00:04:00.807 TEST_HEADER include/spdk/jsonrpc.h 00:04:00.807 CC app/spdk_tgt/spdk_tgt.o 00:04:00.807 TEST_HEADER include/spdk/likely.h 00:04:00.807 TEST_HEADER include/spdk/keyring.h 00:04:00.807 TEST_HEADER include/spdk/keyring_module.h 00:04:00.807 TEST_HEADER include/spdk/log.h 00:04:00.807 CC app/vhost/vhost.o 00:04:00.807 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:00.807 TEST_HEADER include/spdk/lvol.h 00:04:00.807 TEST_HEADER include/spdk/memory.h 00:04:00.807 TEST_HEADER include/spdk/mmio.h 00:04:00.807 CC app/iscsi_tgt/iscsi_tgt.o 00:04:00.807 TEST_HEADER include/spdk/nbd.h 00:04:00.807 TEST_HEADER include/spdk/notify.h 00:04:00.807 TEST_HEADER include/spdk/nvme_intel.h 00:04:00.807 TEST_HEADER include/spdk/nvme.h 00:04:00.807 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:00.807 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:00.807 TEST_HEADER include/spdk/nvme_spec.h 00:04:00.807 TEST_HEADER include/spdk/nvme_zns.h 00:04:00.807 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:00.807 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:00.807 TEST_HEADER include/spdk/nvmf.h 00:04:00.807 TEST_HEADER include/spdk/nvmf_spec.h 00:04:00.807 TEST_HEADER include/spdk/nvmf_transport.h 00:04:00.807 TEST_HEADER include/spdk/opal.h 00:04:00.807 TEST_HEADER include/spdk/opal_spec.h 00:04:00.807 TEST_HEADER include/spdk/pci_ids.h 00:04:00.807 TEST_HEADER include/spdk/pipe.h 00:04:00.807 TEST_HEADER include/spdk/queue.h 00:04:00.807 TEST_HEADER include/spdk/reduce.h 00:04:00.807 TEST_HEADER include/spdk/rpc.h 00:04:00.807 TEST_HEADER include/spdk/scheduler.h 00:04:00.807 TEST_HEADER include/spdk/scsi.h 00:04:00.807 TEST_HEADER include/spdk/scsi_spec.h 00:04:00.807 TEST_HEADER include/spdk/sock.h 00:04:00.807 TEST_HEADER include/spdk/stdinc.h 00:04:00.807 TEST_HEADER include/spdk/string.h 00:04:00.807 
TEST_HEADER include/spdk/thread.h 00:04:00.807 TEST_HEADER include/spdk/trace.h 00:04:00.807 TEST_HEADER include/spdk/trace_parser.h 00:04:00.807 TEST_HEADER include/spdk/tree.h 00:04:00.807 TEST_HEADER include/spdk/ublk.h 00:04:00.807 TEST_HEADER include/spdk/util.h 00:04:00.807 TEST_HEADER include/spdk/version.h 00:04:00.807 TEST_HEADER include/spdk/uuid.h 00:04:00.807 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:00.807 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:00.807 TEST_HEADER include/spdk/vhost.h 00:04:00.807 TEST_HEADER include/spdk/vmd.h 00:04:00.807 TEST_HEADER include/spdk/xor.h 00:04:00.807 TEST_HEADER include/spdk/zipf.h 00:04:00.807 CXX test/cpp_headers/accel.o 00:04:00.807 CXX test/cpp_headers/accel_module.o 00:04:00.807 CXX test/cpp_headers/assert.o 00:04:00.807 CXX test/cpp_headers/barrier.o 00:04:00.807 CXX test/cpp_headers/bdev.o 00:04:00.807 CXX test/cpp_headers/base64.o 00:04:00.807 CXX test/cpp_headers/bdev_zone.o 00:04:00.807 CXX test/cpp_headers/bdev_module.o 00:04:00.807 CXX test/cpp_headers/blob_bdev.o 00:04:00.807 CXX test/cpp_headers/bit_pool.o 00:04:00.807 CXX test/cpp_headers/bit_array.o 00:04:00.807 CXX test/cpp_headers/blobfs_bdev.o 00:04:00.807 CXX test/cpp_headers/blobfs.o 00:04:00.807 CXX test/cpp_headers/blob.o 00:04:00.807 CXX test/cpp_headers/config.o 00:04:00.807 CXX test/cpp_headers/cpuset.o 00:04:00.807 CXX test/cpp_headers/conf.o 00:04:00.807 CXX test/cpp_headers/crc16.o 00:04:00.807 CXX test/cpp_headers/dif.o 00:04:00.807 CXX test/cpp_headers/crc32.o 00:04:00.807 CXX test/cpp_headers/crc64.o 00:04:00.807 CXX test/cpp_headers/dma.o 00:04:00.807 CXX test/cpp_headers/endian.o 00:04:00.807 CXX test/cpp_headers/env_dpdk.o 00:04:00.807 CXX test/cpp_headers/env.o 00:04:00.807 CXX test/cpp_headers/event.o 00:04:00.807 CXX test/cpp_headers/fd.o 00:04:00.807 CXX test/cpp_headers/file.o 00:04:00.807 CXX test/cpp_headers/fd_group.o 00:04:00.807 CXX test/cpp_headers/ftl.o 00:04:00.807 CXX test/cpp_headers/hexlify.o 00:04:00.807 CXX test/cpp_headers/histogram_data.o 00:04:00.807 CXX test/cpp_headers/gpt_spec.o 00:04:00.807 CXX test/cpp_headers/idxd.o 00:04:00.807 CXX test/cpp_headers/idxd_spec.o 00:04:00.807 CXX test/cpp_headers/init.o 00:04:00.807 CXX test/cpp_headers/ioat.o 00:04:00.807 CXX test/cpp_headers/iscsi_spec.o 00:04:00.807 CXX test/cpp_headers/json.o 00:04:00.807 CXX test/cpp_headers/ioat_spec.o 00:04:00.807 CXX test/cpp_headers/jsonrpc.o 00:04:00.807 CXX test/cpp_headers/keyring.o 00:04:00.807 CXX test/cpp_headers/keyring_module.o 00:04:00.807 CXX test/cpp_headers/likely.o 00:04:00.807 CXX test/cpp_headers/log.o 00:04:00.807 CXX test/cpp_headers/lvol.o 00:04:00.807 CXX test/cpp_headers/mmio.o 00:04:00.807 CXX test/cpp_headers/memory.o 00:04:00.807 CXX test/cpp_headers/nbd.o 00:04:00.807 CXX test/cpp_headers/nvme.o 00:04:00.807 CXX test/cpp_headers/notify.o 00:04:00.807 CXX test/cpp_headers/nvme_intel.o 00:04:00.807 CXX test/cpp_headers/nvme_ocssd.o 00:04:00.807 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:00.807 CXX test/cpp_headers/nvme_spec.o 00:04:00.807 CXX test/cpp_headers/nvme_zns.o 00:04:00.807 CXX test/cpp_headers/nvmf_cmd.o 00:04:00.807 CXX test/cpp_headers/nvmf.o 00:04:00.807 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:00.807 CXX test/cpp_headers/nvmf_spec.o 00:04:00.807 CXX test/cpp_headers/nvmf_transport.o 00:04:00.807 CXX test/cpp_headers/opal.o 00:04:00.807 CXX test/cpp_headers/opal_spec.o 00:04:00.807 CXX test/cpp_headers/pci_ids.o 00:04:00.807 CXX test/cpp_headers/queue.o 00:04:00.807 CXX test/cpp_headers/pipe.o 00:04:00.807 
CXX test/cpp_headers/reduce.o 00:04:00.807 CXX test/cpp_headers/rpc.o 00:04:00.807 CXX test/cpp_headers/scheduler.o 00:04:00.807 CC examples/vmd/lsvmd/lsvmd.o 00:04:00.808 CC test/app/histogram_perf/histogram_perf.o 00:04:00.808 CXX test/cpp_headers/scsi.o 00:04:00.808 CC examples/idxd/perf/perf.o 00:04:00.808 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:00.808 CC examples/nvme/hotplug/hotplug.o 00:04:00.808 CC examples/ioat/verify/verify.o 00:04:00.808 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:00.808 CC examples/ioat/perf/perf.o 00:04:00.808 CC examples/accel/perf/accel_perf.o 00:04:00.808 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:00.808 CC examples/sock/hello_world/hello_sock.o 00:04:00.808 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:00.808 CC test/event/reactor/reactor.o 00:04:00.808 CC test/env/vtophys/vtophys.o 00:04:00.808 CC examples/nvme/arbitration/arbitration.o 00:04:00.808 CC test/event/event_perf/event_perf.o 00:04:00.808 CC examples/vmd/led/led.o 00:04:00.808 CC test/app/jsoncat/jsoncat.o 00:04:00.808 CC examples/nvme/reconnect/reconnect.o 00:04:00.808 CC test/nvme/err_injection/err_injection.o 00:04:00.808 CXX test/cpp_headers/scsi_spec.o 00:04:00.808 CC examples/nvme/abort/abort.o 00:04:00.808 CC test/app/stub/stub.o 00:04:00.808 CC test/env/pci/pci_ut.o 00:04:00.808 CC examples/nvme/hello_world/hello_world.o 00:04:00.808 CC app/fio/nvme/fio_plugin.o 00:04:00.808 CC test/env/memory/memory_ut.o 00:04:00.808 CC test/nvme/compliance/nvme_compliance.o 00:04:00.808 CC test/nvme/sgl/sgl.o 00:04:00.808 CC examples/util/zipf/zipf.o 00:04:00.808 CC test/nvme/reset/reset.o 00:04:00.808 CC test/thread/poller_perf/poller_perf.o 00:04:00.808 CC test/nvme/aer/aer.o 00:04:00.808 CC test/nvme/cuse/cuse.o 00:04:00.808 CC test/nvme/boot_partition/boot_partition.o 00:04:00.808 CC test/event/reactor_perf/reactor_perf.o 00:04:00.808 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:00.808 CC test/nvme/fused_ordering/fused_ordering.o 00:04:00.808 CC test/nvme/fdp/fdp.o 00:04:00.808 CC test/nvme/startup/startup.o 00:04:00.808 CC test/nvme/overhead/overhead.o 00:04:00.808 CC test/app/bdev_svc/bdev_svc.o 00:04:00.808 CC test/nvme/reserve/reserve.o 00:04:00.808 CC examples/bdev/hello_world/hello_bdev.o 00:04:00.808 CC test/nvme/e2edp/nvme_dp.o 00:04:00.808 CC test/dma/test_dma/test_dma.o 00:04:00.808 CXX test/cpp_headers/sock.o 00:04:00.808 CC test/nvme/connect_stress/connect_stress.o 00:04:00.808 CC test/event/app_repeat/app_repeat.o 00:04:00.808 CC test/nvme/simple_copy/simple_copy.o 00:04:01.092 CC examples/blob/cli/blobcli.o 00:04:01.092 CC test/accel/dif/dif.o 00:04:01.092 CC test/blobfs/mkfs/mkfs.o 00:04:01.092 CC test/bdev/bdevio/bdevio.o 00:04:01.092 CC examples/thread/thread/thread_ex.o 00:04:01.092 LINK spdk_lspci 00:04:01.092 CC examples/bdev/bdevperf/bdevperf.o 00:04:01.092 CC examples/blob/hello_world/hello_blob.o 00:04:01.092 CC examples/nvmf/nvmf/nvmf.o 00:04:01.092 CC test/event/scheduler/scheduler.o 00:04:01.092 CC app/fio/bdev/fio_plugin.o 00:04:01.092 LINK rpc_client_test 00:04:01.092 LINK nvmf_tgt 00:04:01.092 LINK spdk_nvme_discover 00:04:01.365 LINK interrupt_tgt 00:04:01.365 CC test/env/mem_callbacks/mem_callbacks.o 00:04:01.365 LINK vhost 00:04:01.365 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:01.365 LINK iscsi_tgt 00:04:01.365 LINK spdk_tgt 00:04:01.365 CC test/lvol/esnap/esnap.o 00:04:01.365 LINK lsvmd 00:04:01.365 LINK event_perf 00:04:01.365 LINK spdk_trace_record 00:04:01.365 LINK vtophys 00:04:01.365 LINK env_dpdk_post_init 00:04:01.365 
LINK reactor 00:04:01.365 LINK cmb_copy 00:04:01.625 LINK reactor_perf 00:04:01.625 LINK err_injection 00:04:01.625 LINK jsoncat 00:04:01.625 LINK histogram_perf 00:04:01.625 LINK led 00:04:01.625 LINK connect_stress 00:04:01.625 LINK app_repeat 00:04:01.625 LINK zipf 00:04:01.625 LINK ioat_perf 00:04:01.625 LINK poller_perf 00:04:01.625 LINK boot_partition 00:04:01.625 LINK pmr_persistence 00:04:01.625 CXX test/cpp_headers/string.o 00:04:01.625 CXX test/cpp_headers/stdinc.o 00:04:01.625 LINK verify 00:04:01.625 LINK doorbell_aers 00:04:01.625 CXX test/cpp_headers/thread.o 00:04:01.625 CXX test/cpp_headers/trace.o 00:04:01.625 CXX test/cpp_headers/trace_parser.o 00:04:01.625 LINK bdev_svc 00:04:01.625 CXX test/cpp_headers/tree.o 00:04:01.625 CXX test/cpp_headers/ublk.o 00:04:01.625 CXX test/cpp_headers/util.o 00:04:01.625 CXX test/cpp_headers/uuid.o 00:04:01.625 CXX test/cpp_headers/version.o 00:04:01.625 LINK stub 00:04:01.625 CXX test/cpp_headers/vfio_user_pci.o 00:04:01.625 CXX test/cpp_headers/vfio_user_spec.o 00:04:01.625 LINK startup 00:04:01.625 CXX test/cpp_headers/vhost.o 00:04:01.625 CXX test/cpp_headers/vmd.o 00:04:01.625 CXX test/cpp_headers/xor.o 00:04:01.625 CXX test/cpp_headers/zipf.o 00:04:01.625 LINK fused_ordering 00:04:01.625 LINK hello_bdev 00:04:01.625 LINK hello_sock 00:04:01.625 LINK hotplug 00:04:01.625 LINK spdk_dd 00:04:01.625 LINK hello_world 00:04:01.625 LINK reset 00:04:01.625 LINK simple_copy 00:04:01.625 LINK reserve 00:04:01.625 LINK nvme_dp 00:04:01.625 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:01.625 LINK hello_blob 00:04:01.625 LINK mkfs 00:04:01.625 LINK thread 00:04:01.625 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:01.625 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:01.625 LINK scheduler 00:04:01.625 LINK aer 00:04:01.625 LINK idxd_perf 00:04:01.625 LINK sgl 00:04:01.625 LINK nvme_compliance 00:04:01.887 LINK nvmf 00:04:01.887 LINK overhead 00:04:01.887 LINK spdk_trace 00:04:01.887 LINK arbitration 00:04:01.887 LINK reconnect 00:04:01.887 LINK fdp 00:04:01.887 LINK abort 00:04:01.887 LINK test_dma 00:04:01.887 LINK bdevio 00:04:01.887 LINK pci_ut 00:04:01.887 LINK dif 00:04:01.887 LINK accel_perf 00:04:01.887 LINK blobcli 00:04:01.887 LINK nvme_fuzz 00:04:01.887 LINK spdk_nvme 00:04:01.887 LINK nvme_manage 00:04:01.887 LINK spdk_bdev 00:04:02.148 LINK spdk_nvme_perf 00:04:02.148 LINK vhost_fuzz 00:04:02.148 LINK spdk_top 00:04:02.148 LINK spdk_nvme_identify 00:04:02.148 LINK mem_callbacks 00:04:02.148 LINK bdevperf 00:04:02.410 LINK memory_ut 00:04:02.672 LINK cuse 00:04:03.245 LINK iscsi_fuzz 00:04:05.796 LINK esnap 00:04:06.057 00:04:06.057 real 0m49.451s 00:04:06.057 user 6m33.159s 00:04:06.057 sys 4m30.398s 00:04:06.057 11:12:35 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:04:06.057 11:12:35 make -- common/autotest_common.sh@10 -- $ set +x 00:04:06.057 ************************************ 00:04:06.057 END TEST make 00:04:06.057 ************************************ 00:04:06.320 11:12:35 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:06.320 11:12:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:06.320 11:12:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:06.320 11:12:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.320 11:12:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:06.320 11:12:35 -- pm/common@44 -- $ pid=3282909 00:04:06.320 11:12:35 -- pm/common@50 -- $ kill -TERM 3282909 
00:04:06.320 11:12:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.320 11:12:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:06.320 11:12:35 -- pm/common@44 -- $ pid=3282910 00:04:06.320 11:12:35 -- pm/common@50 -- $ kill -TERM 3282910 00:04:06.320 11:12:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.320 11:12:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:06.320 11:12:35 -- pm/common@44 -- $ pid=3282912 00:04:06.320 11:12:35 -- pm/common@50 -- $ kill -TERM 3282912 00:04:06.320 11:12:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.320 11:12:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:06.320 11:12:35 -- pm/common@44 -- $ pid=3282937 00:04:06.320 11:12:35 -- pm/common@50 -- $ sudo -E kill -TERM 3282937 00:04:06.320 11:12:35 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:06.320 11:12:35 -- nvmf/common.sh@7 -- # uname -s 00:04:06.320 11:12:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:06.320 11:12:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:06.320 11:12:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:06.320 11:12:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:06.320 11:12:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:06.320 11:12:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:06.320 11:12:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:06.320 11:12:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:06.320 11:12:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:06.320 11:12:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:06.320 11:12:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:06.320 11:12:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:06.320 11:12:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:06.320 11:12:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:06.320 11:12:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:06.320 11:12:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:06.320 11:12:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:06.320 11:12:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:06.320 11:12:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:06.320 11:12:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:06.320 11:12:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.320 11:12:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.320 11:12:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.320 11:12:35 -- paths/export.sh@5 -- # export PATH 00:04:06.320 11:12:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.320 11:12:35 -- nvmf/common.sh@47 -- # : 0 00:04:06.320 11:12:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:06.320 11:12:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:06.320 11:12:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:06.320 11:12:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:06.320 11:12:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:06.320 11:12:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:06.320 11:12:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:06.320 11:12:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:06.320 11:12:35 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:06.320 11:12:35 -- spdk/autotest.sh@32 -- # uname -s 00:04:06.320 11:12:35 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:06.320 11:12:35 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:06.320 11:12:35 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:04:06.320 11:12:35 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:06.320 11:12:35 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:04:06.320 11:12:35 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:06.320 11:12:35 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:06.320 11:12:35 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:06.320 11:12:35 -- spdk/autotest.sh@48 -- # udevadm_pid=3345697 00:04:06.320 11:12:35 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:06.320 11:12:35 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:06.320 11:12:35 -- pm/common@17 -- # local monitor 00:04:06.320 11:12:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.320 11:12:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.320 11:12:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.320 11:12:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.320 11:12:35 -- pm/common@21 -- # date +%s 00:04:06.320 11:12:35 -- pm/common@25 -- # sleep 1 00:04:06.320 11:12:35 -- pm/common@21 -- # date +%s 00:04:06.320 11:12:35 -- pm/common@21 -- # date +%s 00:04:06.320 11:12:35 -- pm/common@21 -- # date +%s 00:04:06.320 11:12:35 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718010755 00:04:06.321 11:12:35 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718010755 00:04:06.321 11:12:35 -- 
pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718010755 00:04:06.321 11:12:35 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718010755 00:04:06.321 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718010755_collect-vmstat.pm.log 00:04:06.583 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718010755_collect-cpu-load.pm.log 00:04:06.583 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718010755_collect-cpu-temp.pm.log 00:04:06.583 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718010755_collect-bmc-pm.bmc.pm.log 00:04:07.527 11:12:36 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:07.527 11:12:36 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:07.527 11:12:36 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:07.527 11:12:36 -- common/autotest_common.sh@10 -- # set +x 00:04:07.527 11:12:36 -- spdk/autotest.sh@59 -- # create_test_list 00:04:07.527 11:12:36 -- common/autotest_common.sh@747 -- # xtrace_disable 00:04:07.527 11:12:36 -- common/autotest_common.sh@10 -- # set +x 00:04:07.527 11:12:36 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:04:07.527 11:12:36 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:07.527 11:12:36 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:07.527 11:12:36 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:04:07.527 11:12:36 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:07.527 11:12:36 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:07.527 11:12:36 -- common/autotest_common.sh@1454 -- # uname 00:04:07.527 11:12:36 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:04:07.527 11:12:36 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:07.527 11:12:36 -- common/autotest_common.sh@1474 -- # uname 00:04:07.527 11:12:36 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:04:07.527 11:12:36 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:07.527 11:12:36 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:07.527 11:12:36 -- spdk/autotest.sh@72 -- # hash lcov 00:04:07.527 11:12:36 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:07.527 11:12:36 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:07.527 --rc lcov_branch_coverage=1 00:04:07.527 --rc lcov_function_coverage=1 00:04:07.527 --rc genhtml_branch_coverage=1 00:04:07.527 --rc genhtml_function_coverage=1 00:04:07.527 --rc genhtml_legend=1 00:04:07.527 --rc geninfo_all_blocks=1 00:04:07.527 ' 00:04:07.527 11:12:36 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:07.527 --rc lcov_branch_coverage=1 00:04:07.527 --rc lcov_function_coverage=1 00:04:07.527 --rc genhtml_branch_coverage=1 00:04:07.527 --rc genhtml_function_coverage=1 00:04:07.527 --rc genhtml_legend=1 00:04:07.527 --rc geninfo_all_blocks=1 00:04:07.527 ' 00:04:07.527 11:12:36 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:07.527 --rc 
lcov_branch_coverage=1 00:04:07.527 --rc lcov_function_coverage=1 00:04:07.527 --rc genhtml_branch_coverage=1 00:04:07.527 --rc genhtml_function_coverage=1 00:04:07.527 --rc genhtml_legend=1 00:04:07.527 --rc geninfo_all_blocks=1 00:04:07.527 --no-external' 00:04:07.527 11:12:36 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:07.527 --rc lcov_branch_coverage=1 00:04:07.527 --rc lcov_function_coverage=1 00:04:07.527 --rc genhtml_branch_coverage=1 00:04:07.527 --rc genhtml_function_coverage=1 00:04:07.527 --rc genhtml_legend=1 00:04:07.527 --rc geninfo_all_blocks=1 00:04:07.527 --no-external' 00:04:07.527 11:12:36 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:07.527 lcov: LCOV version 1.14 00:04:07.527 11:12:36 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:04:19.775 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:19.775 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:34.698 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions 
found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:34.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:34.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:34.699 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:34.699 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 
00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:34.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:34.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:35.694 11:13:04 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:35.694 11:13:04 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:35.694 11:13:04 -- common/autotest_common.sh@10 -- # set +x 00:04:35.694 11:13:04 -- spdk/autotest.sh@91 -- # rm -f 00:04:35.694 11:13:04 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:38.998 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:04:38.998 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:04:38.998 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:04:38.998 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:04:38.998 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:04:38.998 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:04:38.998 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:04:38.998 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:04:38.998 0000:65:00.0 (144d a80a): Already using the nvme driver 00:04:38.998 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:04:38.998 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:04:38.998 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:04:39.259 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:04:39.259 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:04:39.259 0000:00:01.3 (8086 
0b00): Already using the ioatdma driver 00:04:39.259 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:04:39.259 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:04:39.520 11:13:08 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:39.520 11:13:08 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:04:39.520 11:13:08 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:04:39.520 11:13:08 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:04:39.520 11:13:08 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:39.520 11:13:08 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:04:39.520 11:13:08 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:04:39.520 11:13:08 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:39.520 11:13:08 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:39.520 11:13:08 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:39.520 11:13:08 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:39.520 11:13:08 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:39.520 11:13:08 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:39.520 11:13:08 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:39.520 11:13:08 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:39.520 No valid GPT data, bailing 00:04:39.520 11:13:08 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:39.520 11:13:08 -- scripts/common.sh@391 -- # pt= 00:04:39.520 11:13:08 -- scripts/common.sh@392 -- # return 1 00:04:39.520 11:13:08 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:39.520 1+0 records in 00:04:39.520 1+0 records out 00:04:39.520 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465001 s, 225 MB/s 00:04:39.520 11:13:08 -- spdk/autotest.sh@118 -- # sync 00:04:39.520 11:13:08 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:39.520 11:13:08 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:39.520 11:13:08 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:47.661 11:13:16 -- spdk/autotest.sh@124 -- # uname -s 00:04:47.661 11:13:16 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:47.661 11:13:16 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:04:47.661 11:13:16 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:47.661 11:13:16 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:47.661 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:04:47.661 ************************************ 00:04:47.661 START TEST setup.sh 00:04:47.661 ************************************ 00:04:47.661 11:13:16 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:04:47.661 * Looking for test storage... 
00:04:47.661 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:47.661 11:13:16 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:47.661 11:13:16 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:47.661 11:13:16 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:04:47.661 11:13:16 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:47.661 11:13:16 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:47.661 11:13:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:47.661 ************************************ 00:04:47.661 START TEST acl 00:04:47.661 ************************************ 00:04:47.661 11:13:16 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:04:47.923 * Looking for test storage... 00:04:47.923 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:47.923 11:13:16 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:47.923 11:13:16 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:04:47.923 11:13:16 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:04:47.923 11:13:16 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:04:47.923 11:13:16 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:47.923 11:13:16 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:04:47.923 11:13:16 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:04:47.923 11:13:16 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:47.923 11:13:16 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:47.923 11:13:16 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:47.923 11:13:16 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:47.923 11:13:16 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:47.923 11:13:16 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:47.923 11:13:16 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:47.923 11:13:16 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:47.923 11:13:16 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:52.133 11:13:20 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:52.133 11:13:20 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:52.133 11:13:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:52.133 11:13:20 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:52.133 11:13:20 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.133 11:13:20 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:55.442 Hugepages 00:04:55.442 node hugesize free / total 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.442 
11:13:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.442 00:04:55.442 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@22 -- 
# drivers["$dev"]=nvme 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:55.442 11:13:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.443 11:13:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:04:55.443 11:13:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.443 11:13:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:55.443 11:13:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.443 11:13:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:04:55.443 11:13:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.443 11:13:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:55.443 11:13:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.443 11:13:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:04:55.443 11:13:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.443 11:13:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:55.443 11:13:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.443 11:13:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:04:55.443 11:13:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.443 11:13:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:55.443 11:13:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.443 11:13:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:04:55.443 11:13:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:55.443 11:13:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:55.443 11:13:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.443 11:13:24 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:55.443 11:13:24 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:55.443 11:13:24 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:55.443 11:13:24 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:55.443 11:13:24 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:55.443 ************************************ 00:04:55.443 START TEST denied 00:04:55.443 ************************************ 00:04:55.443 11:13:24 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:04:55.443 11:13:24 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:04:55.443 11:13:24 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:55.443 11:13:24 
setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:04:55.443 11:13:24 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.443 11:13:24 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:59.656 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:04:59.656 11:13:27 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:04:59.656 11:13:27 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:59.656 11:13:27 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:59.656 11:13:27 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:04:59.656 11:13:27 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:04:59.656 11:13:27 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:59.656 11:13:27 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:59.656 11:13:27 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:59.656 11:13:27 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:59.656 11:13:27 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:03.881 00:05:03.881 real 0m8.183s 00:05:03.881 user 0m2.609s 00:05:03.881 sys 0m4.775s 00:05:03.881 11:13:32 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:03.881 11:13:32 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:03.881 ************************************ 00:05:03.881 END TEST denied 00:05:03.881 ************************************ 00:05:03.881 11:13:32 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:03.881 11:13:32 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:03.881 11:13:32 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:03.881 11:13:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:03.881 ************************************ 00:05:03.881 START TEST allowed 00:05:03.881 ************************************ 00:05:03.881 11:13:32 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:05:03.881 11:13:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:05:03.881 11:13:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:03.881 11:13:32 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.881 11:13:32 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:03.881 11:13:32 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:05:09.219 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:09.219 11:13:37 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:05:09.219 11:13:37 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:09.219 11:13:37 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:09.219 11:13:37 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:09.219 11:13:37 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:13.428 00:05:13.428 real 0m9.522s 00:05:13.428 user 0m2.814s 00:05:13.428 sys 0m4.915s 00:05:13.428 11:13:41 setup.sh.acl.allowed -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:05:13.428 11:13:41 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:13.428 ************************************ 00:05:13.428 END TEST allowed 00:05:13.428 ************************************ 00:05:13.428 00:05:13.428 real 0m25.351s 00:05:13.428 user 0m8.298s 00:05:13.428 sys 0m14.649s 00:05:13.428 11:13:41 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:13.428 11:13:41 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:13.428 ************************************ 00:05:13.428 END TEST acl 00:05:13.428 ************************************ 00:05:13.428 11:13:41 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:05:13.428 11:13:41 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:13.428 11:13:41 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:13.428 11:13:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:13.428 ************************************ 00:05:13.428 START TEST hugepages 00:05:13.428 ************************************ 00:05:13.428 11:13:41 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:05:13.428 * Looking for test storage... 00:05:13.428 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 107317872 kB' 'MemAvailable: 110532420 kB' 'Buffers: 2704 kB' 'Cached: 10159416 kB' 'SwapCached: 0 kB' 'Active: 7196096 kB' 'Inactive: 3508180 kB' 'Active(anon): 6802480 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545636 kB' 
'Mapped: 213084 kB' 'Shmem: 6260324 kB' 'KReclaimable: 274168 kB' 'Slab: 1003916 kB' 'SReclaimable: 274168 kB' 'SUnreclaim: 729748 kB' 'KernelStack: 27232 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460896 kB' 'Committed_AS: 8329324 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235316 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB' 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.428 
11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.428 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 
-- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.429 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:13.430 11:13:42 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:13.430 11:13:42 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:13.430 11:13:42 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:13.430 11:13:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:13.430 ************************************ 00:05:13.430 START TEST default_setup 00:05:13.430 ************************************ 00:05:13.430 11:13:42 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup 00:05:13.430 11:13:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:13.430 11:13:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:13.430 11:13:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:13.430 11:13:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:13.430 11:13:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:13.430 11:13:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:13.430 11:13:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:13.430 11:13:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:13.430 11:13:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:13.430 11:13:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:13.430 11:13:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:13.430 11:13:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:13.430 11:13:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:13.430 11:13:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:13.430 11:13:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:13.430 11:13:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:13.430 11:13:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:13.430 11:13:42 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:13.430 11:13:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:13.430 11:13:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:13.430 11:13:42 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.430 11:13:42 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:15.977 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:15.977 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:16.238 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:16.238 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:16.238 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:16.238 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:16.238 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:16.238 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:16.238 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:16.238 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:16.238 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:16.238 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:16.238 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:16.238 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:16.238 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:16.238 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:16.238 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:16.814 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:16.814 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:16.814 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:16.814 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:16.814 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:16.814 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:16.814 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:16.814 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:16.814 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:16.814 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:16.814 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:16.814 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:16.814 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:16.814 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.814 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.814 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.814 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.814 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.814 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
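The trace above also shows how the meminfo source is chosen: with node= empty, the test on /sys/devices/system/node/node/meminfo fails and the reader falls back to /proc/meminfo; a real node id would select the per-node sysfs file, whose lines carry a "Node <n> " prefix that the extglob expansion strips. A condensed sketch of that selection logic, assuming extglob is enabled as the traced +([0-9]) pattern requires:

    shopt -s extglob                              # needed for the +([0-9]) pattern below
    node=                                         # empty here; a node id selects sysfs
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")              # per-node lines carry a "Node N " prefix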
00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 109482992 kB' 'MemAvailable: 112697504 kB' 'Buffers: 2704 kB' 'Cached: 10159536 kB' 'SwapCached: 0 kB' 'Active: 7219348 kB' 'Inactive: 3508180 kB' 'Active(anon): 6825732 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568164 kB' 'Mapped: 214368 kB' 'Shmem: 6260444 kB' 'KReclaimable: 274096 kB' 'Slab: 1001688 kB' 'SReclaimable: 274096 kB' 'SUnreclaim: 727592 kB' 'KernelStack: 27296 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 8352856 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235352 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.815 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 11:13:45 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 109482812 kB' 'MemAvailable: 112697324 kB' 'Buffers: 2704 kB' 'Cached: 10159540 kB' 'SwapCached: 0 kB' 'Active: 7213852 kB' 'Inactive: 3508180 kB' 'Active(anon): 6820236 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562596 kB' 'Mapped: 213416 kB' 'Shmem: 6260448 kB' 'KReclaimable: 274096 kB' 'Slab: 1001684 kB' 'SReclaimable: 274096 kB' 'SUnreclaim: 727588 kB' 'KernelStack: 27248 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 8346752 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235316 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB' 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 11:13:45 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.816 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.817 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 11:13:45 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:16.818 11:13:45 
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:16.818 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 109483448 kB' 'MemAvailable: 112697960 kB' 'Buffers: 2704 kB' 'Cached: 10159556 kB' 'SwapCached: 0 kB' 'Active: 7213488 kB' 'Inactive: 3508180 kB' 'Active(anon): 6819872 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562736 kB' 'Mapped: 213340 kB' 'Shmem: 6260464 kB' 'KReclaimable: 274096 kB' 'Slab: 1001676 kB' 'SReclaimable: 274096 kB' 'SUnreclaim: 727580 kB' 'KernelStack: 27280 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 8346772 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235316 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB'
00:05:16.818-00:05:16.820 [xtrace condensed: setup/common.sh@32 tests each field above against HugePages_Rsvd and continues on every non-match, from MemTotal through HugePages_Free; the ~50 identical IFS=': ' / read -r var val _ / continue triplets are omitted]
00:05:16.820 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:16.820 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:16.820 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:16.820 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:05:16.820 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:16.820 nr_hugepages=1024
00:05:16.820 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:16.820 resv_hugepages=0
00:05:16.820 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:16.820 surplus_hugepages=0
00:05:16.820 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:16.820 anon_hugepages=0
00:05:16.820 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:16.820 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
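The xtrace above is the expansion of the test suite's get_meminfo helper, and the same pattern repeats twice more below (for HugePages_Total, then for the per-node HugePages_Surp lookup): slurp a meminfo file, strip any "Node N" prefix, then walk the "key: value" pairs until the requested key matches. A minimal sketch of that behavior, reconstructed from the trace rather than copied from SPDK's test/setup/common.sh, so names and structure are an approximation:

    #!/usr/bin/env bash
    shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

    # get_meminfo KEY [NODE] -- print KEY's value from /proc/meminfo,
    # or from the per-NUMA-node copy when NODE is given
    get_meminfo() {
            local get=$1 node=$2
            local var val
            local mem_f mem

            mem_f=/proc/meminfo
            # per-node stats live under /sys/devices/system/node/nodeN/meminfo
            if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
                    mem_f=/sys/devices/system/node/node$node/meminfo
            fi

            mapfile -t mem < "$mem_f"
            # node meminfo prefixes every line with "Node N "; strip it
            mem=("${mem[@]#Node +([0-9]) }")

            # scan the "key: value" pairs until the requested key matches
            while IFS=': ' read -r var val _; do
                    [[ $var == "$get" ]] && echo "$val" && return 0
            done < <(printf '%s\n' "${mem[@]}")
            return 1
    }

    get_meminfo HugePages_Rsvd     # prints 0 on this box
    get_meminfo HugePages_Surp 0   # same lookup, scoped to NUMA node 0

The field-by-field [[ ... ]] / continue lines in the trace are simply this while loop running under set -x, which is why each scan emits one comparison per meminfo field.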
00:05:16.820 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:16.820 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:16.820 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:16.820 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:16.820 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:16.820 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:16.820 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:16.820 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:16.820 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:16.820 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:16.820 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:16.820 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:16.820 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 109484636 kB' 'MemAvailable: 112699148 kB' 'Buffers: 2704 kB' 'Cached: 10159596 kB' 'SwapCached: 0 kB' 'Active: 7213092 kB' 'Inactive: 3508180 kB' 'Active(anon): 6819476 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562292 kB' 'Mapped: 213340 kB' 'Shmem: 6260504 kB' 'KReclaimable: 274096 kB' 'Slab: 1001676 kB' 'SReclaimable: 274096 kB' 'SUnreclaim: 727580 kB' 'KernelStack: 27248 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 8346796 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235316 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB'
00:05:16.820-00:05:16.822 [xtrace condensed: the same setup/common.sh@32 field-by-field scan, this time against HugePages_Total; every field from MemTotal through Unaccepted is a non-match and continues]
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
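The checks here are the suite's hugepage bookkeeping: the kernel's HugePages_Total must equal the requested page count plus surplus and reserved pages, and get_nodes then snapshots how those pages are spread across NUMA nodes (1024 on node0, 0 on node1 at this point) before the per-node walk that follows. A rough bash equivalent, reusing the get_meminfo sketch above; the nodes_sys name mirrors the traced hugepages.sh, and reading the per-node sysfs counter is an assumption -- this log slice does not show where get_nodes actually takes its values from:

    # global consistency check, as in hugepages.sh@107/@110
    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    total=$(get_meminfo HugePages_Total)
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2

    # per-node snapshot, one plausible get_nodes implementation:
    # each NUMA node exposes its 2 MiB hugepage count under sysfs
    shopt -s extglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "per-node hugepages: ${nodes_sys[*]}"   # "1024 0" on this two-node box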
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:16.822 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659012 kB' 'MemFree: 54068772 kB' 'MemUsed: 11590240 kB' 'SwapCached: 0 kB' 'Active: 4153500 kB' 'Inactive: 3345236 kB' 'Active(anon): 3973548 kB' 'Inactive(anon): 0 kB' 'Active(file): 179952 kB' 'Inactive(file): 3345236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7132688 kB' 'Mapped: 88564 kB' 'AnonPages: 369204 kB' 'Shmem: 3607500 kB' 'KernelStack: 13720 kB' 'PageTables: 5084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131492 kB' 'Slab: 478220 kB' 'SReclaimable: 131492 kB' 'SUnreclaim: 346728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:16.822-00:05:16.823 [xtrace condensed: the setup/common.sh@32 scan again, this time over the node0 fields above against HugePages_Surp; all fields from MemTotal through HugePages_Free are non-matches and continue]
00:05:16.823 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:16.823 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:16.823 11:13:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:16.823 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:16.823 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:16.823 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:16.823 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:16.823 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:16.823 node0=1024 expecting 1024
00:05:16.824 11:13:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:16.824
00:05:16.824 real 0m3.485s
00:05:16.824 user 0m1.144s
00:05:16.824 sys 0m2.210s
00:05:16.824 11:13:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:16.824 11:13:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:05:16.824 ************************************
00:05:16.824 END TEST default_setup
00:05:16.824 ************************************
00:05:16.824 11:13:45 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:16.824 11:13:45 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:16.824 11:13:45 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:16.824 11:13:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:16.824 ************************************
00:05:16.824 START TEST per_node_1G_alloc
00:05:16.824 ************************************
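per_node_1G_alloc, which starts here, asks for 1 GiB worth of hugepages on each of two NUMA nodes: size=1048576 (kB) at the default 2048 kB page size works out to nr_hugepages=512 per node, which the test exports as NRHUGE=512 HUGENODE=0,1 before re-running spdk/scripts/setup.sh (all visible in the trace below). Stripped of the test plumbing, a per-node request of this shape comes down to writing each node's sysfs counter -- a minimal sketch assuming standard kernel sysfs paths and root privileges, not the setup.sh implementation itself:

    # request 512 x 2 MiB hugepages on NUMA nodes 0 and 1 (1 GiB each)
    NRHUGE=512
    for n in 0 1; do
            echo "$NRHUGE" > "/sys/devices/system/node/node$n/hugepages/hugepages-2048kB/nr_hugepages"
    done
    grep HugePages_Total /proc/meminfo   # expect 1024 once both writes land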
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:16.824 11:13:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:05:20.125 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:05:20.125 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:05:20.125 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:05:20.125 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:05:20.125 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:05:20.125 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:05:20.125 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:05:20.125 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:05:20.125 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:05:20.125 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:05:20.125 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:05:20.125 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:05:20.125 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:05:20.125 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:05:20.125 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:05:20.125 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:05:20.125 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:05:20.704 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:05:20.704 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:20.704 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:20.704 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:20.704 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:20.704 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:20.704 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:20.704 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:20.704 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:20.704 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:20.704 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:20.704 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:20.704 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:20.704 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:20.704 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:20.704 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:20.704 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:20.704 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:20.704 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:20.704 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:20.704 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:20.704 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 109533660 kB' 'MemAvailable: 112748168 kB' 'Buffers: 2704 kB' 'Cached: 10159696 kB' 'SwapCached: 0 kB' 'Active: 7212324 kB' 'Inactive: 3508180 kB' 'Active(anon): 6818708 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561356 kB' 'Mapped: 212164 kB' 'Shmem: 6260604 kB' 'KReclaimable: 274088 kB' 'Slab: 1001836 kB' 'SReclaimable: 274088 kB' 'SUnreclaim: 727748 kB' 'KernelStack: 27232 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 8338900 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235316 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB'
00:05:20.704 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace elided: scan of every /proc/meminfo field, MemTotal through HardwareCorrupted, each failing the AnonHugePages match and hitting continue]
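The dump and the elided scan above are common.sh's get_meminfo at work: emit a meminfo snapshot, then split each 'Key: value kB' line on IFS=': ' until the requested key matches. A minimal standalone sketch of the same pattern (hedged: the real helper also strips 'Node N' prefixes so the identical loop can read /sys/devices/system/node/nodeN/meminfo; get_meminfo_sketch is a hypothetical name):

    get_meminfo_sketch() {
        local get=$1 var val _
        # IFS=': ' splits "HugePages_Surp:    0" into var/val; any "kB" lands in _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done </proc/meminfo
        return 1
    }
    # usage: get_meminfo_sketch AnonHugePages   # prints 0 on this box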
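The get_test_nr_hugepages trace earlier (hugepages.sh@49 through @73) is plain sizing arithmetic: 1048576 kB requested, 2048 kB per 2 MiB hugepage, so 512 pages for each node named in HUGENODE. A sketch of that math under the same assumptions (variable names illustrative, not the hugepages.sh source):

    size_kb=1048576                               # 1 GiB requested
    hugepagesize_kb=2048                          # 'Hugepagesize: 2048 kB' in the dumps
    nodes=(0 1)                                   # HUGENODE=0,1
    nr_hugepages=$((size_kb / hugepagesize_kb))   # 512
    for node in "${nodes[@]}"; do
        echo "node${node}: ${nr_hugepages} pages"
    done
    echo "total: $((nr_hugepages * ${#nodes[@]}))"  # 1024, matching HugePages_Total above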
00:05:20.705 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:20.705 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:20.705 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:20.705 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:20.705 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:20.705 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:20.705 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:20.705 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:20.705 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:20.705 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:20.705 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:20.705 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:20.705 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:20.705 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:20.705 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:20.705 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:20.706 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 109533688 kB' 'MemAvailable: 112748196 kB' 'Buffers: 2704 kB' 'Cached: 10159700 kB' 'SwapCached: 0 kB' 'Active: 7212848 kB' 'Inactive: 3508180 kB' 'Active(anon): 6819232 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561700 kB' 'Mapped: 212284 kB' 'Shmem: 6260608 kB' 'KReclaimable: 274088 kB' 'Slab: 1001888 kB' 'SReclaimable: 274088 kB' 'SUnreclaim: 727800 kB' 'KernelStack: 27328 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 8338916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235332 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB'
00:05:20.706 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace elided: scan of every /proc/meminfo field, MemTotal through HugePages_Rsvd, each failing the HugePages_Surp match and hitting continue]
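verify_nr_hugepages is assembling its bookkeeping from these lookups: anon, then surplus, then reserved, before comparing per-node totals against what the test requested. Sketched end-to-end with the hypothetical helper above (expected values taken from the dumps in this log, not a verbatim copy of hugepages.sh):

    anon=$(get_meminfo_sketch AnonHugePages)     # 0, so THP is not interfering
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0
    total=$(get_meminfo_sketch HugePages_Total)  # 1024 (512 per node x 2 nodes)
    (( anon == 0 && surp == 0 && resv == 0 && total == 1024 )) ||
        echo "unexpected hugepage state" >&2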
00:05:20.708 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:20.708 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:20.708 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:20.708 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:20.708 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:20.708 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:20.708 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:20.708 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:20.708 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:20.708 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:20.708 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:20.708 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:20.708 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:20.708 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:20.708 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:20.708 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:20.708 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 109535540 kB' 'MemAvailable: 112750048 kB' 'Buffers: 2704 kB' 'Cached: 10159720 kB' 'SwapCached: 0 kB' 'Active: 7212300 kB' 'Inactive: 3508180 kB' 'Active(anon): 6818684 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561312 kB' 'Mapped: 212284 kB' 'Shmem: 6260628 kB' 'KReclaimable: 274088 kB' 'Slab: 1001888 kB' 'SReclaimable: 274088 kB' 'SUnreclaim: 727800 kB' 'KernelStack: 27264 kB' 'PageTables: 8692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 8337380 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235412 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB'
00:05:20.708 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace elided: field-by-field scan for HugePages_Rsvd]
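Outside the harness, the NRHUGE=512 HUGENODE=0,1 setup invocation traced above boils down to per-node sysfs writes like the following (a sketch only; scripts/setup.sh also handles device binding, permissions, and fallback paths):

    for node in 0 1; do
        echo 512 | sudo tee \
            /sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages >/dev/null
    done
    grep -E '^HugePages_(Total|Free):' /proc/meminfo   # expect 1024 / 1024 afterwards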
00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
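The xtrace above is setup/common.sh stepping through /proc/meminfo one field at a time and skipping every key (continue) until it reaches the one that was requested, here HugePages_Rsvd. A minimal paraphrase of that scan, reconstructed from the @31/@32 trace lines rather than copied from the script (the real get_meminfo uses mapfile and differs in detail):

  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo var val _
      # per-node counters live under /sys and prefix each line with "Node <n> "
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # not the requested field -> keep scanning
          echo "${val:-0}"                   # 0 for HugePages_Rsvd here, 1024 for HugePages_Total below
          return 0
      done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
  }

/proc/meminfo carries the system-wide counters; the /sys/devices/system/node/nodeN/meminfo variant used further down reports the same fields per NUMA node.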
00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.709 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.710 
11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:20.710 nr_hugepages=1024 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:20.710 resv_hugepages=0 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:20.710 surplus_hugepages=0 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:20.710 anon_hugepages=0 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 109535136 kB' 'MemAvailable: 112749644 kB' 'Buffers: 2704 kB' 'Cached: 10159736 kB' 'SwapCached: 0 kB' 'Active: 7213888 kB' 'Inactive: 3508180 kB' 'Active(anon): 6820272 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562920 kB' 'Mapped: 212284 kB' 'Shmem: 6260644 kB' 'KReclaimable: 274088 kB' 'Slab: 1001888 kB' 'SReclaimable: 274088 kB' 'SUnreclaim: 727800 kB' 'KernelStack: 27296 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 8356008 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235476 
kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB' 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.710 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
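Earlier in this stretch hugepages.sh echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, then evaluated (( 1024 == nr_hugepages + surp + resv )): the system-wide HugePages_Total it is now re-reading must equal the requested page count plus surplus plus reserved pages. A hedged sketch of that check with the helper lookups written out explicitly (paraphrased, not the verbatim script; surp is read before this excerpt):

  nr_hugepages=1024
  resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
  surp=$(get_meminfo HugePages_Surp)     # 0 in this run
  total=$(get_meminfo HugePages_Total)   # 1024 in this run
  (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2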
00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.711 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.712 11:13:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659012 kB' 'MemFree: 55145176 kB' 'MemUsed: 10513836 kB' 'SwapCached: 0 kB' 'Active: 4155252 kB' 'Inactive: 3345236 kB' 'Active(anon): 3975300 kB' 'Inactive(anon): 0 kB' 'Active(file): 179952 kB' 'Inactive(file): 3345236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7132740 kB' 'Mapped: 87700 kB' 'AnonPages: 370868 kB' 'Shmem: 3607552 kB' 'KernelStack: 13608 kB' 'PageTables: 5016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131492 kB' 'Slab: 478460 kB' 'SReclaimable: 131492 kB' 'SUnreclaim: 346968 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.712 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679880 kB' 'MemFree: 54392508 
kB' 'MemUsed: 6287372 kB' 'SwapCached: 0 kB' 'Active: 3057644 kB' 'Inactive: 162944 kB' 'Active(anon): 2843980 kB' 'Inactive(anon): 0 kB' 'Active(file): 213664 kB' 'Inactive(file): 162944 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3029748 kB' 'Mapped: 124592 kB' 'AnonPages: 191056 kB' 'Shmem: 2653140 kB' 'KernelStack: 13560 kB' 'PageTables: 3544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 142596 kB' 'Slab: 523428 kB' 'SReclaimable: 142596 kB' 'SUnreclaim: 380832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.716 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.716 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.716 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.716 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:20.716 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:20.716 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:20.716 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:20.716 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:20.716 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:20.716 node0=512 expecting 512 00:05:20.716 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:20.716 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:20.716 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:20.716 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:20.716 node1=512 expecting 512 00:05:20.716 11:13:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:20.716 00:05:20.716 real 0m3.829s 00:05:20.716 user 0m1.548s 00:05:20.716 sys 0m2.316s 00:05:20.716 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:20.716 11:13:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:20.716 ************************************ 00:05:20.716 END TEST per_node_1G_alloc 00:05:20.716 ************************************ 00:05:20.716 11:13:49 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:20.716 11:13:49 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:20.716 11:13:49 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:20.716 11:13:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:20.716 ************************************ 00:05:20.716 START TEST even_2G_alloc 00:05:20.716 ************************************ 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:20.716 
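[Editor's note] The trace above repeatedly walks /sys/devices/system/node/nodeN/meminfo looking for a single field (HugePages_Surp, then HugePages_Total/Free), and the even_2G_alloc test that starts here expects 2097152 kB of 2048 kB pages, i.e. 1024 hugepages split 512/512 across the two NUMA nodes. The following is a minimal standalone sketch of that pattern; the function and variable names (get_node_meminfo, per_node, nodes) are illustrative only and are not the actual setup/common.sh or setup/hugepages.sh helpers exercised in this log.

#!/usr/bin/env bash
# Illustrative sketch (not the real setup/common.sh): read one field from a
# NUMA node's meminfo, the same way the xtrace above scans each line and
# "continue"s until the requested key matches.
get_node_meminfo() {
    local field=$1 node=$2 var val _
    # Per-node lines look like "Node 1 HugePages_Total:     512"; skip the
    # "Node N" prefix, split on ": ", and compare the key name.
    while IFS=': ' read -r _ _ var val _; do
        [[ $var == "$field" ]] && { echo "${val:-0}"; return 0; }
    done < "/sys/devices/system/node/node${node}/meminfo"
    echo 0
}

# Even 2G allocation expectation mirrored from the log: 2097152 kB of 2048 kB
# pages = 1024 hugepages, divided evenly across the nodes (512 per node here).
nr_hugepages=$(( 2097152 / 2048 ))
nodes=2
per_node=$(( nr_hugepages / nodes ))
for n in $(seq 0 $(( nodes - 1 ))); do
    echo "node${n}=$(get_node_meminfo HugePages_Total "$n") expecting ${per_node}"
done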
11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.716 11:13:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:24.021 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:24.021 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:24.021 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:24.021 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:24.021 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:24.021 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:24.021 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:24.021 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:24.021 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:24.021 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:24.021 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:24.021 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:24.021 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:24.021 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:24.021 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:24.021 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:24.021 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:24.601 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:24.601 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:24.601 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:24.601 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:24.601 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:24.601 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:24.601 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:24.601 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:24.601 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:24.601 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:24.601 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:24.601 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:24.601 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.601 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.602 11:13:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 109525088 kB' 'MemAvailable: 112739596 kB' 'Buffers: 2704 kB' 'Cached: 10159900 kB' 'SwapCached: 0 kB' 'Active: 7214916 kB' 'Inactive: 3508180 kB' 'Active(anon): 6821300 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563920 kB' 'Mapped: 213320 kB' 'Shmem: 6260808 kB' 'KReclaimable: 274088 kB' 'Slab: 1002580 kB' 'SReclaimable: 274088 kB' 'SUnreclaim: 728492 kB' 'KernelStack: 27520 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 8374408 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235684 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB' 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.602 11:13:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.602 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 109524224 kB' 'MemAvailable: 112738732 kB' 'Buffers: 2704 kB' 'Cached: 10159904 kB' 'SwapCached: 0 kB' 'Active: 7214356 kB' 'Inactive: 3508180 kB' 'Active(anon): 6820740 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563372 kB' 'Mapped: 213284 kB' 'Shmem: 6260812 kB' 'KReclaimable: 274088 kB' 'Slab: 1002568 kB' 'SReclaimable: 274088 kB' 'SUnreclaim: 728480 kB' 'KernelStack: 27424 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 8374424 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235588 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB' 00:05:24.603 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.603 11:13:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.604 11:13:53 
00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.604 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[trace condensed: the same @31 read / @32 compare-and-continue cycle repeats for every remaining /proc/meminfo key -- Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, ..., HugePages_Total, HugePages_Free, HugePages_Rsvd -- until the requested key comes up]
00:05:24.605 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.605 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:24.605 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
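The cycle just traced is how get_meminfo answers every query in this run: slurp the relevant meminfo file, strip any per-node prefix, then split each "key: value" line on IFS=': ' and compare the key against the one requested. A minimal bash sketch of that pattern, reconstructed from the logged statements (not the verbatim SPDK test/setup/common.sh; the real helper's exact defaults and line numbers may differ):

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace above.
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
	local get=$1    # key to report, e.g. HugePages_Surp
	local node=$2   # optional NUMA node number
	local var val
	local mem_f=/proc/meminfo mem line

	# A per-node query reads that node's own meminfo instead; with an
	# empty $node this path does not exist and /proc/meminfo is kept.
	[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
		mem_f=/sys/devices/system/node/node$node/meminfo

	mapfile -t mem < "$mem_f"
	# Node files prefix every line with "Node N "; strip it.
	mem=("${mem[@]#Node +([0-9]) }")

	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue   # keep scanning until the key matches
		echo "$val"
		return 0
	done
	return 1
}

get_meminfo HugePages_Surp     # prints 0 on the machine in this log
get_meminfo HugePages_Surp 0   # node-0 value, read from node0/meminfo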
00:05:24.605 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:24.605 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:24.605 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:24.605 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:24.605 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:24.605 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:24.605 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:24.605 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:24.606 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:24.606 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:24.606 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:24.606 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:24.606 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:24.606 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 109523444 kB' 'MemAvailable: 112737952 kB' 'Buffers: 2704 kB' 'Cached: 10159924 kB' 'SwapCached: 0 kB' 'Active: 7214720 kB' 'Inactive: 3508180 kB' 'Active(anon): 6821104 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563696 kB' 'Mapped: 213284 kB' 'Shmem: 6260832 kB' 'KReclaimable: 274088 kB' 'Slab: 1002580 kB' 'SReclaimable: 274088 kB' 'SUnreclaim: 728492 kB' 'KernelStack: 27408 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 8374444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235652 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB'
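The snapshot just printed is worth a second look: HugePages_Total and HugePages_Free are both 1024 with Rsvd and Surp at 0, so the pool is fully allocated and untouched, and Hugepagesize is 2048 kB. The Hugetlb line is simply their product, the 2 GiB this even_2G_alloc test is named for. Checking that arithmetic in the shell (values copied from the snapshot above):

total=1024 size_kb=2048
echo $(( total * size_kb ))                 # 2097152 kB -- matches the Hugetlb line
echo $(( total * size_kb / 1024 / 1024 ))   # 2 GiB -- the even_2G_alloc target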
[trace condensed: the @31 read / @32 compare-and-continue cycle walks the snapshot keys again -- MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, ..., HugePages_Total, HugePages_Free -- until the requested key comes up]
00:05:24.608 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.608 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:24.608 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:24.608 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:24.608 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:24.608 nr_hugepages=1024
00:05:24.608 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:24.608 resv_hugepages=0
00:05:24.608 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:24.608 surplus_hugepages=0
00:05:24.608 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:24.608 anon_hugepages=0
00:05:24.608 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:24.608 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
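The two arithmetic guards at hugepages.sh@107 and @109 are the core assertion here: the 1024 pages the test requested must equal the kernel's nr_hugepages once surplus and reserved pages are folded in, and, separately, none of them may be surplus or reserved. A sketch of the same check, reusing the get_meminfo sketch from earlier (the echo-on-failure handling is an assumption; the real script fails through its own error path):

requested=1024
nr_hugepages=$(get_meminfo HugePages_Total)   # 1024 in this run
surp=$(get_meminfo HugePages_Surp)            # 0
resv=$(get_meminfo HugePages_Rsvd)            # 0

(( requested == nr_hugepages + surp + resv )) || echo "hugepage accounting is off"
(( requested == nr_hugepages ))               || echo "surplus/reserved pages present"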
00:05:24.608 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:24.608 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:24.608 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:24.608 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:24.608 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:24.608 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:24.608 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:24.608 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:24.608 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:24.608 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:24.608 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:24.608 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:24.608 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 109523000 kB' 'MemAvailable: 112737508 kB' 'Buffers: 2704 kB' 'Cached: 10159944 kB' 'SwapCached: 0 kB' 'Active: 7214464 kB' 'Inactive: 3508180 kB' 'Active(anon): 6820848 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563444 kB' 'Mapped: 213284 kB' 'Shmem: 6260852 kB' 'KReclaimable: 274088 kB' 'Slab: 1002580 kB' 'SReclaimable: 274088 kB' 'SUnreclaim: 728492 kB' 'KernelStack: 27344 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 8371612 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB'
[trace condensed: the @31 read / @32 compare-and-continue cycle walks the keys once more -- Buffers, Cached, SwapCached, Active, Inactive, ..., CmaFree, Unaccepted -- until the requested key comes up]
00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
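get_nodes has just found two NUMA nodes and recorded the expected even split: 512 of the 1024 pages on each. The loop that follows in the trace walks each node and re-reads the counters from that node's own meminfo file. A sketch of that discovery-plus-verification shape, again reusing the get_meminfo sketch from earlier (seeding nodes_test from nodes_sys is an assumption made so the sketch runs standalone; the real test populates it earlier in the run):

shopt -s extglob nullglob
declare -A nodes_sys nodes_test
resv=0

# Discover NUMA nodes; expect an even split of the 1024-page pool.
for node in /sys/devices/system/node/node+([0-9]); do
	nodes_sys[${node##*node}]=512
done
no_nodes=${#nodes_sys[@]}   # 2 on this machine
(( no_nodes > 0 ))

# Seed the expected per-node counts so the sketch runs standalone.
for n in "${!nodes_sys[@]}"; do nodes_test[$n]=${nodes_sys[$n]}; done

for node in "${!nodes_test[@]}"; do
	(( nodes_test[node] += resv ))                 # fold reserved pages in
	surp=$(get_meminfo HugePages_Surp "$node")     # node-local counter
	echo "node$node expects ${nodes_test[$node]} pages, surplus ${surp:-0}"
done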
+([0-9]) }") 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659012 kB' 'MemFree: 55136464 kB' 'MemUsed: 10522548 kB' 'SwapCached: 0 kB' 'Active: 4155848 kB' 'Inactive: 3345236 kB' 'Active(anon): 3975896 kB' 'Inactive(anon): 0 kB' 'Active(file): 179952 kB' 'Inactive(file): 3345236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7132772 kB' 'Mapped: 87736 kB' 'AnonPages: 371524 kB' 'Shmem: 3607584 kB' 'KernelStack: 13720 kB' 'PageTables: 5132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131492 kB' 'Slab: 478440 kB' 'SReclaimable: 131492 kB' 'SUnreclaim: 346948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.610 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.611 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679880 kB' 'MemFree: 54386688 kB' 'MemUsed: 6293192 kB' 'SwapCached: 0 kB' 'Active: 3057948 kB' 'Inactive: 162944 kB' 'Active(anon): 2844284 kB' 'Inactive(anon): 0 kB' 'Active(file): 213664 kB' 'Inactive(file): 162944 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3029900 kB' 'Mapped: 125548 kB' 'AnonPages: 191220 kB' 'Shmem: 2653292 kB' 'KernelStack: 13592 kB' 'PageTables: 3536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 142596 kB' 'Slab: 523724 kB' 'SReclaimable: 142596 kB' 'SUnreclaim: 381128 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.612 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.613 11:13:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:24.613 node0=512 expecting 512 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:24.613 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:24.614 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:24.614 node1=512 expecting 512 00:05:24.614 11:13:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:24.614 00:05:24.614 real 0m3.854s 00:05:24.614 user 0m1.507s 00:05:24.614 sys 0m2.394s 00:05:24.614 11:13:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:24.614 11:13:53 
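The node0/node1 readouts above come from the get_meminfo helper in setup/common.sh, which reads either /proc/meminfo or the per-node meminfo file and walks it key by key until it finds the requested field. A minimal bash sketch of that parsing, reconstructed from the traced commands (the exact upstream wording may differ, and the zero fallback at the end is an assumption):

shopt -s extglob                     # needed for the +([0-9]) pattern below
get_meminfo() {                      # usage: get_meminfo KEY [NODE]
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    local mem var val _ line
    # with a node argument, prefer the per-node meminfo file when it exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # per-node lines carry a "Node <n> " prefix; strip it before parsing
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    echo 0
}

Called as get_meminfo HugePages_Surp 1 it prints the node1 surplus count (0 here), which the hugepages.sh loop adds into nodes_test[node] before the node0=512 / node1=512 expectations are checked.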
setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:24.614 ************************************ 00:05:24.614 END TEST even_2G_alloc 00:05:24.614 ************************************ 00:05:24.614 11:13:53 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:24.614 11:13:53 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:24.614 11:13:53 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:24.614 11:13:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:24.614 ************************************ 00:05:24.614 START TEST odd_alloc 00:05:24.614 ************************************ 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.614 11:13:53 setup.sh.hugepages.odd_alloc -- 
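Before the PCI setup output below, hugepages.sh has already split the odd page count across the two nodes: 1025 pages become 512 on node1 and 513 on node0, because each pass hands the current node an even share of whatever is still unplaced. A small bash sketch of that split, inferred from the nodes_test assignments and the ': 513' / ': 1' no-op lines in the trace (the exact loop shape is an assumption):

_nr_hugepages=1025      # total pages the odd_alloc test requests (HUGEMEM=2049)
_no_nodes=2             # NUMA nodes to spread them over
nodes_test=()
while (( _no_nodes > 0 )); do
    # give the highest remaining node an even share of what is left
    nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
    : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # pages still unplaced
    : $(( --_no_nodes ))                                   # nodes still to fill
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"       # node0=513 node1=512

The remainder page therefore lands on node0 (513) while node1 keeps 512.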
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:28.112 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:28.112 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:28.112 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:28.112 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:28.112 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:28.112 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:28.112 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:28.112 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:28.112 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:28.113 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:28.113 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:28.113 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:28.113 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:28.113 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:28.113 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:28.113 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:28.113 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 109523344 kB' 'MemAvailable: 112737852 kB' 'Buffers: 2704 kB' 'Cached: 10160072 kB' 'SwapCached: 0 kB' 'Active: 7215376 
kB' 'Inactive: 3508180 kB' 'Active(anon): 6821760 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564100 kB' 'Mapped: 213312 kB' 'Shmem: 6260980 kB' 'KReclaimable: 274088 kB' 'Slab: 1002284 kB' 'SReclaimable: 274088 kB' 'SUnreclaim: 728196 kB' 'KernelStack: 27344 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508448 kB' 'Committed_AS: 8372372 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB' 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.379 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 
-- # local mem_f mem 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.380 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 109522588 kB' 'MemAvailable: 112737096 kB' 'Buffers: 2704 kB' 'Cached: 10160076 kB' 'SwapCached: 0 kB' 'Active: 7215044 kB' 'Inactive: 3508180 kB' 'Active(anon): 6821428 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563816 kB' 'Mapped: 213304 kB' 'Shmem: 6260984 kB' 'KReclaimable: 274088 kB' 'Slab: 1002284 kB' 'SReclaimable: 274088 kB' 'SUnreclaim: 728196 kB' 'KernelStack: 27328 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508448 kB' 'Committed_AS: 8372388 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235476 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.381 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.382 
11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 109523516 kB' 'MemAvailable: 112738024 kB' 'Buffers: 2704 kB' 'Cached: 10160076 kB' 'SwapCached: 0 kB' 'Active: 7215052 kB' 'Inactive: 3508180 kB' 'Active(anon): 6821436 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563828 kB' 'Mapped: 213304 kB' 'Shmem: 6260984 kB' 'KReclaimable: 274088 kB' 'Slab: 1002300 kB' 'SReclaimable: 274088 kB' 'SUnreclaim: 728212 kB' 'KernelStack: 27344 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508448 kB' 'Committed_AS: 8372408 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 
235476 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.382 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.383 11:13:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.383 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.384 
11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:28.384 nr_hugepages=1025 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:28.384 resv_hugepages=0 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:28.384 surplus_hugepages=0 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:28.384 anon_hugepages=0 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.384 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 109524240 kB' 'MemAvailable: 112738748 kB' 'Buffers: 2704 kB' 'Cached: 10160076 kB' 'SwapCached: 0 kB' 'Active: 7215200 kB' 'Inactive: 3508180 kB' 'Active(anon): 6821584 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563980 kB' 'Mapped: 213304 kB' 'Shmem: 6260984 kB' 'KReclaimable: 274088 kB' 'Slab: 1002300 kB' 'SReclaimable: 274088 kB' 'SUnreclaim: 728212 kB' 'KernelStack: 27328 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508448 kB' 'Committed_AS: 8372428 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235476 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.385 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
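The long runs of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]' and 'continue' records above are setup/common.sh's get_meminfo helper scanning every field of the (per-node) meminfo file until it reaches the requested key, then echoing its value (1025 for the HugePages_Total lookup that completes just below). A minimal sketch of that loop, reconstructed only from the common.sh line references visible in this trace; the real helper in the SPDK tree may differ in detail:

shopt -s extglob                         # needed for the +([0-9]) pattern below

get_meminfo() {                          # usage: get_meminfo <field> [numa-node]
    local get=$1 node=${2:-}
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # Per-node lookups read the node-local meminfo file when it exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")     # strip the "Node N " prefix so keys match
    # Scan field by field; each non-matching key is one of the "continue" records above
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"                      # e.g. 1025 for HugePages_Total
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}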
00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659012 kB' 'MemFree: 55137852 kB' 'MemUsed: 10521160 kB' 'SwapCached: 0 kB' 'Active: 4155264 kB' 'Inactive: 3345236 kB' 
'Active(anon): 3975312 kB' 'Inactive(anon): 0 kB' 'Active(file): 179952 kB' 'Inactive(file): 3345236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7132796 kB' 'Mapped: 87736 kB' 'AnonPages: 370980 kB' 'Shmem: 3607608 kB' 'KernelStack: 13736 kB' 'PageTables: 5172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131492 kB' 'Slab: 478536 kB' 'SReclaimable: 131492 kB' 'SUnreclaim: 347044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.386 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.387 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.387 11:13:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679880 kB' 'MemFree: 54386244 kB' 'MemUsed: 6293636 kB' 'SwapCached: 0 kB' 'Active: 3059632 kB' 'Inactive: 162944 kB' 'Active(anon): 2845968 kB' 'Inactive(anon): 0 kB' 'Active(file): 213664 kB' 'Inactive(file): 162944 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3030044 kB' 'Mapped: 125568 kB' 'AnonPages: 192560 kB' 'Shmem: 2653436 kB' 'KernelStack: 13560 kB' 'PageTables: 3440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 142596 kB' 'Slab: 523764 kB' 'SReclaimable: 142596 kB' 'SUnreclaim: 381168 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 
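The HugePages_Surp lookup for node 0 finished above with a value of 0, and the same lookup for node 1 is in progress below. hugepages.sh adds these surplus pages, plus any reserved pages, to the expected per-node counts before the final comparison. A condensed sketch with this run's values (resv and surplus are both 0); it assumes the get_meminfo sketch given earlier, and nodes_test was populated before this section by get_test_nr_hugepages_per_node, so the literal values here are inferred from the "expecting" lines printed further below:

# Expected split of the 1025 odd-allocated pages, inferred from this run's output
nodes_test=([0]=513 [1]=512)
resv=0                                            # no reserved pages in this run

for node in "${!nodes_test[@]}"; do
    ((nodes_test[node] += resv))                  # hugepages.sh@116
    surp=$(get_meminfo HugePages_Surp "$node")    # hugepages.sh@117, returns 0 here
    ((nodes_test[node] += surp))
done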
00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:28.651 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
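The pass/fail decision for odd_alloc follows just below. Because the kernel may place the odd 1025th page on either NUMA node, the test compares the sorted sets of per-node counts rather than the exact placement, which is why "node0=512 expecting 513" still passes. A condensed sketch of that check using this run's numbers; which array supplies which value in the echoed lines is inferred from the trace, so treat the details as approximate:

# Actual per-node counts read in get_nodes vs. the test's expected split
nodes_sys=([0]=512 [1]=513)
nodes_test=([0]=513 [1]=512)
sorted_t=() sorted_s=()

for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1                  # use the counts as array indices...
    sorted_s[nodes_sys[node]]=1
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done

# ...so the index lists come out sorted: "512 513" == "512 513" and the test passes
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "odd_alloc per-node layout OK"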
00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:05:28.652 node0=512 expecting 513 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:05:28.652 node1=513 expecting 512 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:28.652 00:05:28.652 real 0m3.824s 00:05:28.652 user 0m1.463s 00:05:28.652 sys 0m2.402s 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:28.652 11:13:57 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:28.652 ************************************ 00:05:28.652 END TEST odd_alloc 00:05:28.652 ************************************ 00:05:28.652 11:13:57 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:28.652 11:13:57 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:28.652 11:13:57 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:28.652 11:13:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:28.652 ************************************ 00:05:28.652 START TEST custom_alloc 00:05:28.652 ************************************ 00:05:28.652 11:13:57 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc 00:05:28.652 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:28.652 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:28.652 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:28.652 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:28.652 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # 
(( size >= default_hugepages )) 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.653 11:13:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:31.953 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:31.953 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:31.953 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:31.953 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:31.953 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:31.953 0000:80:01.3 (8086 0b00): 
Already using the vfio-pci driver 00:05:31.953 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:31.953 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:31.953 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:31.953 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:31.953 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:31.953 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:31.953 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:31.953 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:31.953 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:31.953 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:31.953 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:32.220 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:32.220 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:32.220 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:32.220 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:32.220 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 108473640 kB' 'MemAvailable: 111688148 kB' 'Buffers: 2704 kB' 'Cached: 10160252 kB' 'SwapCached: 0 kB' 'Active: 7217084 kB' 'Inactive: 3508180 kB' 'Active(anon): 6823468 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565596 kB' 'Mapped: 213328 kB' 
'Shmem: 6261160 kB' 'KReclaimable: 274088 kB' 'Slab: 1003676 kB' 'SReclaimable: 274088 kB' 'SUnreclaim: 729588 kB' 'KernelStack: 27392 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985184 kB' 'Committed_AS: 8373448 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235508 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB' 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 
11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
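The trace above and below is setup/common.sh's get_meminfo helper walking /proc/meminfo: with IFS=': ' it reads one "key value" pair per line, hits continue for every key that is not the requested field (here AnonHugePages), and finally echoes the matching value - 0 in this run, which hugepages.sh records as anon=0. A minimal sketch of that parsing pattern, simplified to the system-wide /proc/meminfo only (the real helper first mapfiles the file, strips any "Node <N>" prefix, and can be pointed at a per-node meminfo when a node argument is given):

get_meminfo() {
    # Usage: get_meminfo <field>, e.g. get_meminfo AnonHugePages
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Skip every meminfo line whose key is not the requested field
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

anon=$(get_meminfo AnonHugePages)   # 0 in the run logged here

The repeated "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" entries are simply this loop visiting every meminfo key before the match, amplified by xtrace.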
00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 
-- # local node= 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 108473624 kB' 'MemAvailable: 111688132 kB' 'Buffers: 2704 kB' 'Cached: 10160256 kB' 'SwapCached: 0 kB' 'Active: 7217380 kB' 'Inactive: 3508180 kB' 'Active(anon): 6823764 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566120 kB' 'Mapped: 213328 kB' 'Shmem: 6261164 kB' 'KReclaimable: 274088 kB' 'Slab: 1003692 kB' 'SReclaimable: 274088 kB' 'SUnreclaim: 729604 kB' 'KernelStack: 27392 kB' 'PageTables: 8868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985184 kB' 'Committed_AS: 8373376 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235412 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 11:14:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.224 
11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.224 11:14:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.224 11:14:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 108472716 kB' 'MemAvailable: 111687208 kB' 'Buffers: 2704 kB' 'Cached: 10160272 kB' 'SwapCached: 0 kB' 'Active: 7216412 kB' 'Inactive: 3508180 kB' 'Active(anon): 6822796 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564972 kB' 'Mapped: 213320 kB' 'Shmem: 6261180 kB' 'KReclaimable: 274056 kB' 'Slab: 1003716 kB' 'SReclaimable: 274056 kB' 'SUnreclaim: 729660 kB' 'KernelStack: 27344 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985184 kB' 'Committed_AS: 8373396 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235428 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB' 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.224 11:14:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.224 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.225 
11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.225 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.226 11:14:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:32.226 nr_hugepages=1536 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:32.226 resv_hugepages=0 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:32.226 surplus_hugepages=0 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:32.226 anon_hugepages=0 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 108472716 kB' 'MemAvailable: 111687208 kB' 'Buffers: 2704 kB' 'Cached: 10160312 kB' 'SwapCached: 0 kB' 'Active: 7216416 kB' 'Inactive: 3508180 kB' 'Active(anon): 6822800 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564908 kB' 'Mapped: 213320 kB' 'Shmem: 
6261220 kB' 'KReclaimable: 274056 kB' 'Slab: 1003716 kB' 'SReclaimable: 274056 kB' 'SUnreclaim: 729660 kB' 'KernelStack: 27328 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985184 kB' 'Committed_AS: 8373420 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235428 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB' 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.226 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.227 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 
00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659012 kB' 'MemFree: 55148332 kB' 'MemUsed: 10510680 kB' 'SwapCached: 0 kB' 'Active: 4157228 kB' 'Inactive: 3345236 kB' 'Active(anon): 3977276 kB' 'Inactive(anon): 0 kB' 'Active(file): 179952 kB' 'Inactive(file): 3345236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7132968 kB' 'Mapped: 87736 kB' 'AnonPages: 372668 kB' 'Shmem: 3607780 kB' 'KernelStack: 13704 kB' 'PageTables: 5088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131492 kB' 'Slab: 479304 kB' 'SReclaimable: 131492 kB' 'SUnreclaim: 347812 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.228 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.229 11:14:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:32.229 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679880 kB' 'MemFree: 53324384 kB' 'MemUsed: 7355496 kB' 'SwapCached: 0 kB' 'Active: 3059388 kB' 'Inactive: 162944 kB' 'Active(anon): 2845724 kB' 'Inactive(anon): 0 kB' 'Active(file): 213664 kB' 'Inactive(file): 162944 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3030048 kB' 'Mapped: 125584 kB' 'AnonPages: 192440 kB' 'Shmem: 2653440 kB' 'KernelStack: 13624 kB' 'PageTables: 3580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 142564 kB' 'Slab: 524412 kB' 'SReclaimable: 142564 kB' 'SUnreclaim: 381848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.230 11:14:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.230 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.492 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.492 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.492 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.492 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.492 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.492 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.492 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.492 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.492 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.492 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.492 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.492 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.492 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.492 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.492 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.492 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.492 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.492 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.492 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.493 11:14:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.493 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.494 11:14:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:32.494 node0=512 expecting 512 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:05:32.494 node1=1024 expecting 1024 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:32.494 00:05:32.494 real 0m3.759s 00:05:32.494 user 0m1.542s 00:05:32.494 sys 0m2.242s 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:32.494 11:14:01 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:32.494 ************************************ 00:05:32.494 END TEST custom_alloc 00:05:32.494 ************************************ 00:05:32.494 11:14:01 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:32.494 11:14:01 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:32.494 11:14:01 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:32.494 11:14:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:32.494 ************************************ 00:05:32.494 START TEST no_shrink_alloc 00:05:32.494 ************************************ 00:05:32.494 11:14:01 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc 00:05:32.494 11:14:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:32.494 11:14:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:32.494 11:14:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:32.494 11:14:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:32.494 11:14:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:32.494 11:14:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:32.494 11:14:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:32.494 11:14:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:32.494 11:14:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:32.494 11:14:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:32.494 11:14:01 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:32.494 11:14:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:32.494 11:14:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:32.494 11:14:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:32.494 11:14:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:32.494 11:14:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:32.494 11:14:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:32.494 11:14:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:32.494 11:14:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:32.494 11:14:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:32.494 11:14:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.494 11:14:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:35.799 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:35.799 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:35.799 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:35.799 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:35.799 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:35.799 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:35.799 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:35.799 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:35.799 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:35.799 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:35.799 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:35.799 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:35.799 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:35.799 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:35.799 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:35.799 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:35.799 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:35.799 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:35.799 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:35.799 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- 
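The xtrace that follows is get_meminfo walking every key in /proc/meminfo until it reaches the one it was asked for (AnonHugePages here, then HugePages_Surp and HugePages_Rsvd further down), skipping all the others with continue. A condensed sketch of what the traced commands appear to do, written out as a standalone function for readability; the names (get, mem_f, the IFS=': ' read loop) follow the trace, but treat it as an approximation rather than the script's exact source:

shopt -s extglob                                  # needed for the +([0-9]) prefix pattern below

get_meminfo() {                                   # condensed from the traced commands; not verbatim source
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        local -a mem
        # a per-node meminfo file is preferred when the node argument names one that exists
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
                && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")          # drop the "Node N " prefix that per-node files carry
        while IFS=': ' read -r var val _; do
                [[ $var == "$get" ]] || continue  # every other key is skipped, as the trace shows
                echo "$val"                       # e.g. 0 for AnonHugePages, 1024 for HugePages_Total
                return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1                                  # key not present (not exercised in this trace)
}

anon=$(get_meminfo AnonHugePages)                 # the trace below resolves this to 0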
setup/common.sh@17 -- # local get=AnonHugePages 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 109510444 kB' 'MemAvailable: 112724936 kB' 'Buffers: 2704 kB' 'Cached: 10160424 kB' 'SwapCached: 0 kB' 'Active: 7216944 kB' 'Inactive: 3508180 kB' 'Active(anon): 6823328 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564924 kB' 'Mapped: 212468 kB' 'Shmem: 6261332 kB' 'KReclaimable: 274056 kB' 'Slab: 1003480 kB' 'SReclaimable: 274056 kB' 'SUnreclaim: 729424 kB' 'KernelStack: 27280 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 8339776 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235348 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB' 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.800 11:14:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.800 11:14:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.800 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.801 
11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:35.801 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 109511352 kB' 'MemAvailable: 112725844 kB' 'Buffers: 2704 kB' 'Cached: 10160428 kB' 'SwapCached: 0 kB' 'Active: 7216492 kB' 'Inactive: 3508180 kB' 'Active(anon): 6822876 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 
0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564920 kB' 'Mapped: 212372 kB' 'Shmem: 6261336 kB' 'KReclaimable: 274056 kB' 'Slab: 1003464 kB' 'SReclaimable: 274056 kB' 'SUnreclaim: 729408 kB' 'KernelStack: 27248 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 8339796 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235348 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB' 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.802 11:14:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.802 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.803 11:14:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.803 11:14:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.803 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 109511352 kB' 'MemAvailable: 112725844 kB' 'Buffers: 2704 kB' 'Cached: 10160428 kB' 'SwapCached: 0 kB' 'Active: 7216828 kB' 'Inactive: 3508180 kB' 'Active(anon): 6823212 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565328 kB' 'Mapped: 212372 kB' 'Shmem: 6261336 kB' 'KReclaimable: 274056 kB' 'Slab: 1003464 kB' 'SReclaimable: 274056 kB' 'SUnreclaim: 729408 
kB' 'KernelStack: 27280 kB' 'PageTables: 8664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 8339816 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235364 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB' 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.804 11:14:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.804 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.805 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
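The wall of xtrace above is setup/common.sh walking /proc/meminfo one field at a time: it splits each row on ': ', skips every key that is not the one requested (HugePages_Rsvd at this point), and echoes the value of the first match, which is the 0 the trace returns a little further on. A condensed sketch of that lookup, with illustrative structure rather than the verbatim SPDK helper, could look like this:

    # Sketch only: not the verbatim spdk/test/setup/common.sh implementation.
    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo   # per-node view when a node is given
        fi
        while IFS= read -r line; do
            line=${line#Node [0-9]* }            # per-node files prefix every row with "Node <n> "
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue     # keep scanning until the requested field
            echo "$val"                          # e.g. 0 for HugePages_Rsvd, 1024 for HugePages_Total
            return 0
        done <"$mem_f"
        return 1
    }

Called as resv=$(get_meminfo HugePages_Rsvd) for the machine-wide view, or surp=$(get_meminfo HugePages_Surp 0) for node 0, which matches the two shapes of lookup seen in this log.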
00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.806 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:35.807 nr_hugepages=1024 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:35.807 resv_hugepages=0 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:35.807 surplus_hugepages=0 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:35.807 anon_hugepages=0 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 109513316 kB' 'MemAvailable: 112727808 kB' 'Buffers: 2704 kB' 'Cached: 10160468 kB' 'SwapCached: 0 kB' 'Active: 7216412 kB' 'Inactive: 3508180 kB' 'Active(anon): 6822796 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564800 kB' 'Mapped: 212372 kB' 'Shmem: 6261376 kB' 'KReclaimable: 274056 kB' 'Slab: 1003464 kB' 'SReclaimable: 274056 kB' 'SUnreclaim: 729408 kB' 'KernelStack: 27232 kB' 
'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 8341084 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235348 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB' 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.807 11:14:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.807 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.808 11:14:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
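With those counters in hand, the verification pass running here reduces to arithmetic: the test asked for nr_hugepages=1024, the kernel reports resv_hugepages=0 and surplus_hugepages=0, and the script checks that HugePages_Total equals nr_hugepages + surp + resv before repeating the same tally per NUMA node (the "node0=1024 expecting 1024" line further on). A rough sketch of that accounting, assuming the get_meminfo sketch above and using a hypothetical verify_hugepage_accounting in place of the script's verify_nr_hugepages:

    # Sketch only: stands in for, and simplifies, spdk/test/setup/hugepages.sh.
    verify_hugepage_accounting() {
        local expected=$1                        # 1024 in this run
        local total surp resv node per_node sum=0
        total=$(get_meminfo HugePages_Total)
        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)
        echo "nr_hugepages=$expected resv_hugepages=$resv surplus_hugepages=$surp"
        # The pool the kernel reports has to cover the requested pages plus any
        # surplus and reserved ones, otherwise the earlier allocation fell short.
        (( total == expected + surp + resv )) || return 1
        # Same tally per NUMA node: sum each node's HugePages_Total and compare.
        for node in /sys/devices/system/node/node[0-9]*; do
            [[ -d $node ]] || continue           # skip if the glob did not expand
            per_node=$(get_meminfo HugePages_Total "${node##*node}")
            echo "node${node##*node}=$per_node"
            (( sum += per_node ))
        done
        (( sum == total ))
    }

Both checks pass on this machine, which is why the suite later only re-runs scripts/setup.sh with NRHUGE=512 and CLEAR_HUGE=no and logs that 1024 pages are already allocated on node0 instead of reallocating.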
00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.808 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.809 11:14:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.809 11:14:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659012 kB' 'MemFree: 54098028 kB' 'MemUsed: 11560984 kB' 'SwapCached: 0 kB' 'Active: 4155744 kB' 'Inactive: 3345236 kB' 'Active(anon): 3975792 kB' 'Inactive(anon): 0 kB' 'Active(file): 179952 kB' 'Inactive(file): 3345236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7133088 kB' 'Mapped: 87704 kB' 'AnonPages: 371048 kB' 'Shmem: 3607900 kB' 'KernelStack: 13672 kB' 'PageTables: 5036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131492 kB' 'Slab: 479304 kB' 'SReclaimable: 131492 kB' 'SUnreclaim: 347812 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.809 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.810 11:14:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.810 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:35.811 node0=1024 expecting 1024 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:35.811 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:36.073 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.073 11:14:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:39.384 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:39.384 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:39.384 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:39.384 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:39.384 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:39.384 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:39.384 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:39.384 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:39.384 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:39.384 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:39.384 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:39.384 0000:00:01.4 (8086 0b00): Already using the 
vfio-pci driver
00:05:39.384 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:05:39.384 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:05:39.384 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:05:39.384 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:05:39.384 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:05:39.384 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:39.384 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:39.384 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:39.384 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:39.384 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:39.384 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:39.384 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:39.384 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:39.384 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:39.384 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:39.384 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:39.384 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:39.384 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:39.384 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:39.384 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.384 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:39.384 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:39.384 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.384 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.384 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:39.384 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:39.384 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 109514696 kB' 'MemAvailable: 112729188 kB' 'Buffers: 2704 kB' 'Cached: 10160580 kB' 'SwapCached: 0 kB' 'Active: 7218872 kB' 'Inactive: 3508180 kB' 'Active(anon): 6825256 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567660 kB' 'Mapped: 212292 kB' 'Shmem: 6261488 kB' 'KReclaimable: 274056 kB' 'Slab: 1003520 kB' 'SReclaimable: 274056 kB' 'SUnreclaim: 729464 kB' 'KernelStack: 27344 kB' 'PageTables: 8732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 8343544 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235364 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB'
[... repetitive per-key xtrace elided: setup/common.sh@32 tests each key above against AnonHugePages and continues until it matches ...]
00:05:39.385 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:39.385 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:39.385 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:39.385 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
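What the trace above records: get_meminfo in setup/common.sh slurps a meminfo file with mapfile, strips any per-node "Node <N>" prefix, then splits each line on ': ' and walks the keys until the requested one matches, echoing its value (0 for AnonHugePages on this box). A minimal self-contained sketch of that mechanism, reconstructed from the xtrace rather than copied from SPDK's setup/common.sh:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node +([0-9]) " prefix pattern below

    # Sketch of a get_meminfo-style lookup (illustrative, not the SPDK source).
    get_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo mem
        # When a NUMA node is given and sysfs exposes it, read the per-node file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

Called as in the trace, get_meminfo AnonHugePages prints 0 here, which is exactly the anon=0 assignment recorded above.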
00:05:39.385 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:39.385 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:39.385 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:39.385 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:39.385 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:39.385 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.385 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:39.385 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:39.385 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.385 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.385 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:39.385 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:39.386 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 109514260 kB' 'MemAvailable: 112728752 kB' 'Buffers: 2704 kB' 'Cached: 10160580 kB' 'SwapCached: 0 kB' 'Active: 7218380 kB' 'Inactive: 3508180 kB' 'Active(anon): 6824764 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566528 kB' 'Mapped: 212256 kB' 'Shmem: 6261488 kB' 'KReclaimable: 274056 kB' 'Slab: 1003548 kB' 'SReclaimable: 274056 kB' 'SUnreclaim: 729492 kB' 'KernelStack: 27296 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 8343808 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235412 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB'
[... repetitive per-key xtrace elided: setup/common.sh@32 tests each key above against HugePages_Surp and continues until it matches ...]
00:05:39.654 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:39.654 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:39.654 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:39.654 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
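Note that each get_meminfo call re-reads and re-scans the whole file for a single key, which is why the full meminfo snapshot appears three times in this stretch of the log (once each for AnonHugePages, HugePages_Surp and HugePages_Rsvd, all 0 in this run). When poking at a box by hand, the same counters can be pulled in one pass; an illustrative one-liner, not something the test scripts themselves run:

    awk -F': +' '/^(AnonHugePages|HugePages_Surp|HugePages_Rsvd):/ { print $1, $2 }' /proc/meminfo

On this node it would print "AnonHugePages 0 kB", "HugePages_Rsvd 0" and "HugePages_Surp 0", in file order.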
00:05:39.654 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:39.654 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:39.654 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:39.654 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:39.654 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:39.654 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.654 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:39.654 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:39.654 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.654 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.654 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:39.654 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:39.654 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 109513876 kB' 'MemAvailable: 112728368 kB' 'Buffers: 2704 kB' 'Cached: 10160600 kB' 'SwapCached: 0 kB' 'Active: 7218292 kB' 'Inactive: 3508180 kB' 'Active(anon): 6824676 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566472 kB' 'Mapped: 212256 kB' 'Shmem: 6261508 kB' 'KReclaimable: 274056 kB' 'Slab: 1003484 kB' 'SReclaimable: 274056 kB' 'SUnreclaim: 729428 kB' 'KernelStack: 27408 kB' 'PageTables: 9024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 8343832 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235476 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB'
[... repetitive per-key xtrace elided: setup/common.sh@32 tests each key above against HugePages_Rsvd and continues until it matches ...]
00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:39.656 nr_hugepages=1024
00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:39.656 resv_hugepages=0
00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:39.656 surplus_hugepages=0
00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:39.656 anon_hugepages=0
00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338892 kB' 'MemFree: 109513620 kB' 'MemAvailable: 112728112 kB' 'Buffers: 2704 kB' 'Cached: 10160620 kB' 'SwapCached: 0 kB' 'Active: 7217844 kB' 'Inactive: 3508180 kB' 'Active(anon): 6824228 kB' 'Inactive(anon): 0 kB' 'Active(file): 393616 kB' 'Inactive(file): 3508180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566480 kB' 'Mapped: 212264 kB' 'Shmem: 6261528 kB' 'KReclaimable: 274056 kB' 'Slab: 1003484 kB' 'SReclaimable: 274056 kB' 'SUnreclaim: 729428 kB' 'KernelStack: 27296 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 8342124 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235476 kB' 'VmallocChunk: 0 kB' 'Percpu: 110592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3097976 kB' 'DirectMap2M: 31184896 kB' 'DirectMap1G: 101711872 kB' 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.656 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.657 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.658 11:14:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659012 kB' 'MemFree: 54100476 kB' 'MemUsed: 11558536 kB' 'SwapCached: 0 kB' 'Active: 4159432 kB' 'Inactive: 3345236 kB' 'Active(anon): 3979480 kB' 'Inactive(anon): 0 kB' 'Active(file): 179952 kB' 'Inactive(file): 3345236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7133240 kB' 'Mapped: 87576 kB' 'AnonPages: 374624 kB' 'Shmem: 3608052 kB' 'KernelStack: 13864 kB' 'PageTables: 5576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131492 kB' 'Slab: 479432 kB' 'SReclaimable: 131492 kB' 'SUnreclaim: 347940 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.658 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.659 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:39.660 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:39.660 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:39.660 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:39.660 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:39.660 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:39.660 11:14:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:39.660 node0=1024 expecting 1024 00:05:39.660 11:14:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:39.660 00:05:39.660 real 0m7.193s 00:05:39.660 user 0m2.737s 00:05:39.660 sys 0m4.521s 00:05:39.660 11:14:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:39.660 11:14:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:39.660 ************************************ 00:05:39.660 END TEST no_shrink_alloc 00:05:39.660 ************************************ 00:05:39.660 11:14:08 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:39.660 11:14:08 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:39.660 11:14:08 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:39.660 11:14:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:39.660 11:14:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:39.660 11:14:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:39.660 11:14:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:39.660 11:14:08 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:39.660 11:14:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:39.660 11:14:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:39.660 11:14:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:39.660 11:14:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:39.660 11:14:08 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:39.660 11:14:08 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:39.660 00:05:39.660 real 0m26.554s 00:05:39.660 user 0m10.169s 00:05:39.660 sys 0m16.503s 00:05:39.660 11:14:08 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:39.660 11:14:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:39.660 ************************************ 00:05:39.660 END TEST hugepages 00:05:39.660 ************************************ 00:05:39.660 11:14:08 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:05:39.660 11:14:08 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:39.660 11:14:08 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:39.660 11:14:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:39.660 ************************************ 00:05:39.660 START TEST driver 00:05:39.660 ************************************ 00:05:39.660 11:14:08 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:05:39.922 * Looking for test storage... 
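The long runs of "-- # continue" records in the no_shrink_alloc trace above are the setup helper walking /proc/meminfo (and the node0 copy under sysfs) field by field until it reaches the key it was asked for: HugePages_Rsvd resolves to 0, HugePages_Total to 1024, and the per-node HugePages_Surp to 0. A minimal sketch of that lookup, reconstructed from the xtrace rather than copied from setup/common.sh (the function name below is illustrative):

#!/usr/bin/env bash
# Sketch of the meminfo lookup traced above; not the verbatim SPDK helper.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # With a node index, read the per-node copy under sysfs instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip it so the same
    # "key: value" parsing covers both sources (this needs extglob).
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Every field that is not the requested key is one "continue" record
        # in the trace above.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo_sketch HugePages_Total    # 1024 in this run
get_meminfo_sketch HugePages_Surp 0   # 0 on node 0 in this run

The checks that follow in hugepages.sh, (( 1024 == nr_hugepages + surp + resv )) and the final "node0=1024 expecting 1024", are plain arithmetic over the values returned this way.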
00:05:39.922 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:05:39.922 11:14:08 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:39.922 11:14:08 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:39.922 11:14:08 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:45.232 11:14:13 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:45.232 11:14:13 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:45.232 11:14:13 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:45.232 11:14:13 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:45.232 ************************************ 00:05:45.232 START TEST guess_driver 00:05:45.232 ************************************ 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 319 > 0 )) 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:45.232 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:45.232 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:45.232 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:45.232 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:45.232 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:45.232 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:45.232 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- 
setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:45.232 Looking for driver=vfio-pci 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:45.232 11:14:13 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.539 11:14:16 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.539 11:14:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.539 11:14:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.539 11:14:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.539 11:14:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.539 11:14:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.539 11:14:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.539 11:14:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.539 11:14:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.539 11:14:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.539 11:14:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.539 11:14:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.539 11:14:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.539 11:14:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.539 11:14:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.539 11:14:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.539 11:14:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.539 11:14:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.539 11:14:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.539 11:14:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.539 11:14:17 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:48.539 11:14:17 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:48.539 11:14:17 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:48.540 11:14:17 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:52.798 00:05:52.798 real 0m8.124s 00:05:52.798 user 0m2.517s 00:05:52.798 sys 0m4.636s 00:05:52.798 11:14:21 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:52.798 11:14:21 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:52.798 ************************************ 00:05:52.798 END TEST guess_driver 00:05:52.798 ************************************ 00:05:53.059 00:05:53.059 real 0m13.175s 00:05:53.059 user 0m4.050s 00:05:53.059 sys 0m7.351s 00:05:53.059 11:14:21 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:53.059 
11:14:21 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:53.059 ************************************ 00:05:53.059 END TEST driver 00:05:53.059 ************************************ 00:05:53.059 11:14:21 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:05:53.059 11:14:21 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:53.059 11:14:21 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:53.059 11:14:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:53.059 ************************************ 00:05:53.059 START TEST devices 00:05:53.059 ************************************ 00:05:53.059 11:14:21 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:05:53.059 * Looking for test storage... 00:05:53.059 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:05:53.059 11:14:21 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:53.059 11:14:21 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:53.059 11:14:21 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:53.059 11:14:21 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:57.265 11:14:25 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:57.265 11:14:25 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:05:57.265 11:14:25 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:05:57.265 11:14:25 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:05:57.265 11:14:25 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:05:57.265 11:14:25 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:05:57.265 11:14:25 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:05:57.265 11:14:25 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:57.265 11:14:25 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:05:57.265 11:14:25 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:57.265 11:14:25 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:57.265 11:14:25 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:57.265 11:14:25 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:57.265 11:14:25 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:57.265 11:14:25 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:57.265 11:14:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:57.265 11:14:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:57.265 11:14:25 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:05:57.265 11:14:25 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:05:57.265 11:14:25 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:57.265 11:14:25 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:57.265 11:14:25 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:57.265 No valid GPT data, bailing 00:05:57.265 
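For readers following the guess_driver trace above, the selection logic reduces to roughly the following. This is a minimal standalone sketch, not the actual setup/driver.sh: the function name is invented, the sysfs paths are taken from the trace, and the uio_pci_generic fallback is an assumption.

pick_driver_sketch() {
  # Count IOMMU groups; an active IOMMU (or unsafe no-IOMMU mode) permits vfio-pci.
  shopt -s nullglob
  local groups=(/sys/kernel/iommu_groups/*)
  local unsafe=N
  [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
    unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
  # Only pick vfio-pci if the module can actually be loaded on this kernel.
  if { (( ${#groups[@]} > 0 )) || [[ $unsafe == [Yy] ]]; } &&
     modprobe --show-depends vfio_pci >/dev/null 2>&1; then
    echo vfio-pci          # what the trace above ends up choosing
  else
    echo uio_pci_generic   # assumed fallback when no IOMMU support is present
  fi
}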
11:14:25 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:57.265 11:14:25 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:57.265 11:14:25 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:57.265 11:14:25 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:57.265 11:14:25 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:57.265 11:14:25 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:57.265 11:14:25 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:05:57.265 11:14:25 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:05:57.265 11:14:25 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:57.265 11:14:25 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:05:57.265 11:14:25 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:57.265 11:14:25 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:57.265 11:14:25 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:57.265 11:14:25 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:57.265 11:14:25 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:57.265 11:14:25 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:57.265 ************************************ 00:05:57.265 START TEST nvme_mount 00:05:57.265 ************************************ 00:05:57.265 11:14:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:05:57.265 11:14:25 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:57.265 11:14:25 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:57.265 11:14:25 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:57.265 11:14:25 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:57.265 11:14:25 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:57.265 11:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:57.265 11:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:57.265 11:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:57.265 11:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:57.265 11:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:57.265 11:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:57.265 11:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:57.265 11:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:57.265 11:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:57.265 11:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:57.265 11:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:57.265 11:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:57.265 11:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- 
# sgdisk /dev/nvme0n1 --zap-all 00:05:57.265 11:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:58.207 Creating new GPT entries in memory. 00:05:58.207 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:58.207 other utilities. 00:05:58.207 11:14:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:58.207 11:14:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:58.207 11:14:26 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:58.207 11:14:26 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:58.207 11:14:26 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:59.149 Creating new GPT entries in memory. 00:05:59.149 The operation has completed successfully. 00:05:59.149 11:14:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:59.149 11:14:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:59.149 11:14:27 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3385485 00:05:59.149 11:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:59.149 11:14:27 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:59.149 11:14:27 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:59.149 11:14:27 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:59.149 11:14:27 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:59.149 11:14:27 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:59.149 11:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:59.149 11:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:59.149 11:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:59.149 11:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:59.149 11:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:59.149 11:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:59.149 11:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:59.149 11:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:59.149 11:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:59.149 11:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:05:59.149 11:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:59.149 11:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:59.149 11:14:27 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:59.149 11:14:27 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:06:02.453 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.453 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.453 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.453 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.453 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.453 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.453 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.453 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.453 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.453 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.453 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.453 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.453 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.453 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.453 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.453 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.453 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.454 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:02.454 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:02.454 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.454 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.454 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.454 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.454 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.454 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.454 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.454 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.454 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.454 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.454 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.454 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.454 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.454 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.454 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.454 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.454 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.714 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:02.714 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:02.714 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:02.715 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:02.715 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:02.715 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:02.715 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:02.715 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:02.715 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:02.715 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:02.715 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:02.715 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:02.715 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:02.975 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:02.975 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:06:02.975 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:02.975 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:02.975 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:06:02.975 11:14:31 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:06:02.975 11:14:31 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 
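Condensed, the partition-then-mount flow exercised by the nvme_mount steps above looks roughly like this. It is a hypothetical standalone sketch: the scratch mount point and the use of udevadm settle (instead of the harness' own uevent sync script) are assumptions.

disk=/dev/nvme0n1
mnt=/tmp/nvme_mount                       # assumed scratch mount point
sgdisk "$disk" --zap-all                  # drop any existing GPT/MBR signatures
sgdisk "$disk" --new=1:2048:2099199       # one 1 GiB partition, 2048-sector aligned
udevadm settle                            # wait for /dev/nvme0n1p1 to appear
mkfs.ext4 -qF "${disk}p1"
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                    # dummy file the verification step looks for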
00:06:02.975 11:14:31 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:02.975 11:14:31 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:02.975 11:14:31 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:02.975 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:02.975 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:06:02.975 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:02.975 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:02.975 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:02.975 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:02.975 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:02.975 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:02.975 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:02.975 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.975 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:06:02.975 11:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:02.975 11:14:31 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:02.975 11:14:31 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:06:06.277 11:14:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.277 11:14:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.277 11:14:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.277 11:14:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.277 11:14:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.277 11:14:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.277 11:14:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.277 11:14:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.277 11:14:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.277 11:14:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.277 11:14:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.277 11:14:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.277 11:14:34 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.277 11:14:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.277 11:14:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.277 11:14:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.277 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.277 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:06.277 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:06.277 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.277 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.277 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.277 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.277 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.277 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.277 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.277 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.277 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.277 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.277 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.277 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.277 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.277 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.277 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.277 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.277 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.537 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:06.537 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:06.537 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:06.537 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:06.537 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:06.537 11:14:35 setup.sh.devices.nvme_mount -- 
setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:06.537 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:06:06.537 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:06:06.537 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:06.537 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:06.537 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:06:06.537 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:06.537 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:06.537 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:06.537 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.537 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:06:06.537 11:14:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:06.537 11:14:35 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:06.537 11:14:35 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.840 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.841 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.841 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.841 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.841 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.841 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.841 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.102 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:10.102 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:10.102 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:10.102 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:10.102 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:10.102 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:10.102 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:10.102 11:14:38 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:10.102 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:10.102 00:06:10.102 real 0m12.986s 00:06:10.102 user 0m3.951s 00:06:10.102 sys 0m6.796s 00:06:10.102 11:14:38 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:10.102 11:14:38 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:10.102 ************************************ 00:06:10.102 END TEST nvme_mount 00:06:10.102 ************************************ 00:06:10.102 11:14:38 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:10.102 11:14:38 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 
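The verification pattern repeated above (an allow-list of one PCI device plus a status match) can be summarized as below. Treat it as a sketch: the repo-relative setup.sh path, the error message, and the particular expected mount are illustrative assumptions; the expected string varies per check in the trace.

expected=nvme0n1:nvme0n1p1     # mount the device is expected to be busy with
bdf=0000:65:00.0
found=0
while read -r pci _ _ status; do
  [[ $pci == "$bdf" ]] || continue
  # setup.sh reports e.g. "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev"
  [[ $status == *"Active devices: "*"$expected"* ]] && found=1
done < <(PCI_ALLOWED=$bdf ./scripts/setup.sh config)
(( found == 1 )) || echo "in-use NVMe device was not protected from rebinding" >&2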
00:06:10.103 11:14:38 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:10.103 11:14:38 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:10.103 ************************************ 00:06:10.103 START TEST dm_mount 00:06:10.103 ************************************ 00:06:10.103 11:14:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:06:10.103 11:14:38 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:10.103 11:14:38 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:10.103 11:14:38 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:10.103 11:14:38 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:10.103 11:14:38 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:10.103 11:14:38 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:10.103 11:14:38 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:10.103 11:14:38 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:10.103 11:14:38 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:10.103 11:14:38 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:10.103 11:14:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:10.103 11:14:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:10.103 11:14:38 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:10.103 11:14:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:10.103 11:14:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:10.103 11:14:38 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:10.103 11:14:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:10.103 11:14:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:10.103 11:14:38 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:06:10.103 11:14:38 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:10.103 11:14:38 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:11.047 Creating new GPT entries in memory. 00:06:11.047 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:11.047 other utilities. 00:06:11.047 11:14:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:11.047 11:14:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:11.047 11:14:39 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:11.047 11:14:39 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:11.047 11:14:39 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:06:12.430 Creating new GPT entries in memory. 00:06:12.430 The operation has completed successfully. 
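The dm_mount test starting above carves two 1 GiB partitions and stitches them into a single device-mapper node. A minimal sketch of that idea follows; the linear-concatenation table is an assumption, while the sector numbers mirror the sgdisk calls in the surrounding trace. With no --table argument, dmsetup create reads the table from stdin, which is what the heredoc relies on.

disk=/dev/nvme0n1
sgdisk "$disk" --zap-all
sgdisk "$disk" --new=1:2048:2099199       # partition 1: 2097152 sectors
sgdisk "$disk" --new=2:2099200:4196351    # partition 2: 2097152 sectors
udevadm settle
# dm table line format: <logical_start> <length> linear <backing_dev> <offset>
dmsetup create nvme_dm_test <<EOF
0 2097152 linear ${disk}p1 0
2097152 2097152 linear ${disk}p2 0
EOF
mkfs.ext4 -qF /dev/mapper/nvme_dm_test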
00:06:12.430 11:14:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:12.430 11:14:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:12.430 11:14:40 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:12.430 11:14:40 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:12.430 11:14:40 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:06:13.369 The operation has completed successfully. 00:06:13.369 11:14:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:13.369 11:14:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:13.369 11:14:41 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3390413 00:06:13.369 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:13.370 11:14:42 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:13.370 11:14:42 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding 
PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.678 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.940 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:16.940 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:06:16.940 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:16.940 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:16.940 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:16.940 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:16.940 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:16.940 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:06:16.940 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:16.940 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:16.940 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:16.940 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # 
local found=0 00:06:16.940 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:16.940 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:16.940 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.940 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:06:16.940 11:14:45 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:16.940 11:14:45 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:16.940 11:14:45 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:06:20.238 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.238 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.238 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.238 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.238 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.238 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.238 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.238 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.238 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.238 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.238 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.238 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.238 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.238 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.238 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.238 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.238 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.238 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:20.238 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:20.239 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.239 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.239 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.239 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.239 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
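The holder checks performed above boil down to resolving the mapper symlink and looking for the resulting dm node under each partition's holders directory in sysfs. A small sketch, with device names taken from the trace:

dm=$(readlink -f /dev/mapper/nvme_dm_test)   # e.g. /dev/dm-0 in the trace above
dm=${dm##*/}                                 # keep just "dm-0"
for part in nvme0n1p1 nvme0n1p2; do
  if [[ -e /sys/class/block/$part/holders/$dm ]]; then
    echo "$part is held by $dm"
  else
    echo "warning: $part is not claimed by $dm" >&2
  fi
done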
00:06:20.239 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.239 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.239 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.239 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.239 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.239 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.239 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.239 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.239 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.239 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.239 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.239 11:14:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.499 11:14:49 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:20.499 11:14:49 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:20.499 11:14:49 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:20.499 11:14:49 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:20.499 11:14:49 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:20.499 11:14:49 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:20.499 11:14:49 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:20.499 11:14:49 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:20.499 11:14:49 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:20.499 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:20.499 11:14:49 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:20.499 11:14:49 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:20.499 00:06:20.499 real 0m10.413s 00:06:20.499 user 0m2.828s 00:06:20.499 sys 0m4.601s 00:06:20.499 11:14:49 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:20.499 11:14:49 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:20.499 ************************************ 00:06:20.499 END TEST dm_mount 00:06:20.499 ************************************ 00:06:20.499 11:14:49 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:20.499 11:14:49 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:20.499 11:14:49 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:20.499 11:14:49 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:20.499 11:14:49 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:20.499 11:14:49 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 
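Teardown above follows a strict order: unmount first, remove the dm target, then wipe signatures so the next test sees a blank disk. A hedged sketch of that order, with the mount point path assumed:

mountpoint -q /tmp/dm_mount && umount /tmp/dm_mount
[[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
for dev in /dev/nvme0n1p1 /dev/nvme0n1p2 /dev/nvme0n1; do
  [[ -b $dev ]] && wipefs --all "$dev"
done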
00:06:20.499 11:14:49 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:20.760 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:20.760 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:06:20.760 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:20.760 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:20.760 11:14:49 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:20.760 11:14:49 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:20.760 11:14:49 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:20.760 11:14:49 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:20.760 11:14:49 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:20.760 11:14:49 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:20.760 11:14:49 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:20.760 00:06:20.760 real 0m27.807s 00:06:20.760 user 0m8.375s 00:06:20.760 sys 0m14.057s 00:06:20.760 11:14:49 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:20.760 11:14:49 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:20.760 ************************************ 00:06:20.760 END TEST devices 00:06:20.760 ************************************ 00:06:20.760 00:06:20.760 real 1m33.309s 00:06:20.760 user 0m31.050s 00:06:20.760 sys 0m52.850s 00:06:20.760 11:14:49 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:20.760 11:14:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:20.760 ************************************ 00:06:20.760 END TEST setup.sh 00:06:20.760 ************************************ 00:06:21.021 11:14:49 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:06:24.319 Hugepages 00:06:24.319 node hugesize free / total 00:06:24.319 node0 1048576kB 0 / 0 00:06:24.319 node0 2048kB 2048 / 2048 00:06:24.319 node1 1048576kB 0 / 0 00:06:24.319 node1 2048kB 0 / 0 00:06:24.319 00:06:24.319 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:24.319 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:06:24.319 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:06:24.319 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:06:24.319 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:06:24.319 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:06:24.319 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:06:24.319 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:06:24.319 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:06:24.319 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:06:24.319 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:06:24.319 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:06:24.319 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:06:24.319 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:06:24.319 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:06:24.319 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:06:24.319 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:06:24.319 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:06:24.319 11:14:53 -- spdk/autotest.sh@130 -- # uname -s 00:06:24.319 11:14:53 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:24.319 11:14:53 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:24.319 11:14:53 -- 
common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:06:27.625 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:27.625 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:27.886 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:27.886 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:27.886 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:27.886 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:27.886 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:27.886 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:27.886 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:27.886 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:27.886 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:27.886 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:27.886 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:27.886 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:27.886 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:27.886 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:29.802 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:30.063 11:14:58 -- common/autotest_common.sh@1531 -- # sleep 1 00:06:31.004 11:14:59 -- common/autotest_common.sh@1532 -- # bdfs=() 00:06:31.004 11:14:59 -- common/autotest_common.sh@1532 -- # local bdfs 00:06:31.004 11:14:59 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:06:31.004 11:14:59 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:06:31.004 11:14:59 -- common/autotest_common.sh@1512 -- # bdfs=() 00:06:31.004 11:14:59 -- common/autotest_common.sh@1512 -- # local bdfs 00:06:31.004 11:14:59 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:31.004 11:14:59 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:06:31.004 11:14:59 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:31.004 11:14:59 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:06:31.004 11:14:59 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:06:31.004 11:14:59 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:06:34.344 Waiting for block devices as requested 00:06:34.344 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:34.605 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:34.605 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:34.605 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:34.866 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:34.866 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:34.866 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:35.127 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:35.127 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:06:35.387 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:35.387 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:35.387 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:35.387 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:35.648 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:35.648 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:35.648 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:35.648 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:35.909 11:15:04 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 00:06:35.909 11:15:04 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:06:35.909 11:15:04 -- common/autotest_common.sh@1501 -- # 
readlink -f /sys/class/nvme/nvme0 00:06:36.169 11:15:04 -- common/autotest_common.sh@1501 -- # grep 0000:65:00.0/nvme/nvme 00:06:36.169 11:15:04 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:36.170 11:15:04 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:06:36.170 11:15:04 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:36.170 11:15:04 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:06:36.170 11:15:04 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:06:36.170 11:15:04 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:06:36.170 11:15:04 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:06:36.170 11:15:04 -- common/autotest_common.sh@1544 -- # grep oacs 00:06:36.170 11:15:04 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:06:36.170 11:15:04 -- common/autotest_common.sh@1544 -- # oacs=' 0x5f' 00:06:36.170 11:15:04 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:06:36.170 11:15:04 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:06:36.170 11:15:04 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:06:36.170 11:15:04 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:06:36.170 11:15:04 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:06:36.170 11:15:04 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:06:36.170 11:15:04 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:06:36.170 11:15:04 -- common/autotest_common.sh@1556 -- # continue 00:06:36.170 11:15:04 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:36.170 11:15:04 -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:36.170 11:15:04 -- common/autotest_common.sh@10 -- # set +x 00:06:36.170 11:15:04 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:36.170 11:15:04 -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:36.170 11:15:04 -- common/autotest_common.sh@10 -- # set +x 00:06:36.170 11:15:04 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:06:38.788 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:38.788 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:38.788 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:39.049 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:39.049 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:39.049 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:39.049 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:39.049 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:39.049 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:39.049 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:39.049 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:39.049 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:39.049 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:39.049 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:39.049 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:39.049 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:39.049 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:39.310 11:15:08 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:39.310 11:15:08 -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:39.310 11:15:08 -- common/autotest_common.sh@10 -- # set +x 00:06:39.571 11:15:08 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:39.571 11:15:08 -- 
common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:06:39.571 11:15:08 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:06:39.571 11:15:08 -- common/autotest_common.sh@1576 -- # bdfs=() 00:06:39.571 11:15:08 -- common/autotest_common.sh@1576 -- # local bdfs 00:06:39.571 11:15:08 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:06:39.571 11:15:08 -- common/autotest_common.sh@1512 -- # bdfs=() 00:06:39.571 11:15:08 -- common/autotest_common.sh@1512 -- # local bdfs 00:06:39.571 11:15:08 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:39.571 11:15:08 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:39.571 11:15:08 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:06:39.571 11:15:08 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:06:39.571 11:15:08 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:06:39.571 11:15:08 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:06:39.571 11:15:08 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:06:39.571 11:15:08 -- common/autotest_common.sh@1579 -- # device=0xa80a 00:06:39.571 11:15:08 -- common/autotest_common.sh@1580 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:06:39.571 11:15:08 -- common/autotest_common.sh@1585 -- # printf '%s\n' 00:06:39.571 11:15:08 -- common/autotest_common.sh@1591 -- # [[ -z '' ]] 00:06:39.571 11:15:08 -- common/autotest_common.sh@1592 -- # return 0 00:06:39.571 11:15:08 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:39.571 11:15:08 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:39.571 11:15:08 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:39.571 11:15:08 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:39.571 11:15:08 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:39.571 11:15:08 -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:39.571 11:15:08 -- common/autotest_common.sh@10 -- # set +x 00:06:39.571 11:15:08 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:39.571 11:15:08 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:06:39.571 11:15:08 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:39.571 11:15:08 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:39.571 11:15:08 -- common/autotest_common.sh@10 -- # set +x 00:06:39.571 ************************************ 00:06:39.571 START TEST env 00:06:39.571 ************************************ 00:06:39.571 11:15:08 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:06:39.832 * Looking for test storage... 
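The controller probe traced above reduces to three small shell steps: resolve the BDF's character device through sysfs, read the OACS word (bit 3, value 0x8, is Namespace Management), and read the unallocated capacity. A sketch using the values from this run (/dev/nvme0 behind 0000:65:00.0), assuming nvme-cli is installed:

  # mirror get_nvme_ctrlr_from_bdf: the sysfs path under the BDF names the ctrl node
  ctrlr=$(basename "$(readlink -f /sys/class/nvme/nvme0)")   # -> nvme0
  # OACS is a bitmask; 0x5f & 0x8 == 8, so namespace management is supported
  oacs=$(nvme id-ctrl "/dev/$ctrlr" | grep oacs | cut -d: -f2)
  echo $(( oacs & 0x8 ))
  # unvmcap of 0 means all capacity is already allocated, so the revert is skipped
  nvme id-ctrl "/dev/$ctrlr" | grep unvmcap | cut -d: -f2

The opal_revert_cleanup pass above applies the same bdf walk but keeps only controllers whose PCI device id is 0x0a54; the 144d:a80a drive on this node does not match, so the revert list stays empty and the step returns immediately.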
00:06:39.832 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:06:39.832 11:15:08 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:06:39.832 11:15:08 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:39.832 11:15:08 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:39.832 11:15:08 env -- common/autotest_common.sh@10 -- # set +x 00:06:39.832 ************************************ 00:06:39.832 START TEST env_memory 00:06:39.832 ************************************ 00:06:39.832 11:15:08 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:06:39.832 00:06:39.832 00:06:39.832 CUnit - A unit testing framework for C - Version 2.1-3 00:06:39.832 http://cunit.sourceforge.net/ 00:06:39.832 00:06:39.832 00:06:39.832 Suite: memory 00:06:39.832 Test: alloc and free memory map ...[2024-06-10 11:15:08.661010] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:39.832 passed 00:06:39.832 Test: mem map translation ...[2024-06-10 11:15:08.688818] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:39.832 [2024-06-10 11:15:08.688848] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:39.832 [2024-06-10 11:15:08.688900] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:39.832 [2024-06-10 11:15:08.688913] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:39.832 passed 00:06:39.832 Test: mem map registration ...[2024-06-10 11:15:08.747991] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:39.832 [2024-06-10 11:15:08.748014] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:39.832 passed 00:06:40.094 Test: mem map adjacent registrations ...passed 00:06:40.094 00:06:40.094 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.094 suites 1 1 n/a 0 0 00:06:40.094 tests 4 4 4 0 0 00:06:40.094 asserts 152 152 152 0 n/a 00:06:40.094 00:06:40.094 Elapsed time = 0.205 seconds 00:06:40.094 00:06:40.094 real 0m0.219s 00:06:40.094 user 0m0.209s 00:06:40.094 sys 0m0.009s 00:06:40.094 11:15:08 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:40.094 11:15:08 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:40.094 ************************************ 00:06:40.094 END TEST env_memory 00:06:40.094 ************************************ 00:06:40.094 11:15:08 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:40.094 11:15:08 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:40.094 11:15:08 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:40.094 11:15:08 env -- 
common/autotest_common.sh@10 -- # set +x 00:06:40.094 ************************************ 00:06:40.094 START TEST env_vtophys 00:06:40.094 ************************************ 00:06:40.094 11:15:08 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:40.094 EAL: lib.eal log level changed from notice to debug 00:06:40.094 EAL: Detected lcore 0 as core 0 on socket 0 00:06:40.094 EAL: Detected lcore 1 as core 1 on socket 0 00:06:40.094 EAL: Detected lcore 2 as core 2 on socket 0 00:06:40.094 EAL: Detected lcore 3 as core 3 on socket 0 00:06:40.094 EAL: Detected lcore 4 as core 4 on socket 0 00:06:40.094 EAL: Detected lcore 5 as core 5 on socket 0 00:06:40.094 EAL: Detected lcore 6 as core 6 on socket 0 00:06:40.094 EAL: Detected lcore 7 as core 7 on socket 0 00:06:40.094 EAL: Detected lcore 8 as core 8 on socket 0 00:06:40.094 EAL: Detected lcore 9 as core 9 on socket 0 00:06:40.094 EAL: Detected lcore 10 as core 10 on socket 0 00:06:40.094 EAL: Detected lcore 11 as core 11 on socket 0 00:06:40.094 EAL: Detected lcore 12 as core 12 on socket 0 00:06:40.094 EAL: Detected lcore 13 as core 13 on socket 0 00:06:40.094 EAL: Detected lcore 14 as core 14 on socket 0 00:06:40.094 EAL: Detected lcore 15 as core 15 on socket 0 00:06:40.094 EAL: Detected lcore 16 as core 16 on socket 0 00:06:40.094 EAL: Detected lcore 17 as core 17 on socket 0 00:06:40.094 EAL: Detected lcore 18 as core 18 on socket 0 00:06:40.094 EAL: Detected lcore 19 as core 19 on socket 0 00:06:40.094 EAL: Detected lcore 20 as core 20 on socket 0 00:06:40.094 EAL: Detected lcore 21 as core 21 on socket 0 00:06:40.094 EAL: Detected lcore 22 as core 22 on socket 0 00:06:40.094 EAL: Detected lcore 23 as core 23 on socket 0 00:06:40.094 EAL: Detected lcore 24 as core 24 on socket 0 00:06:40.094 EAL: Detected lcore 25 as core 25 on socket 0 00:06:40.094 EAL: Detected lcore 26 as core 26 on socket 0 00:06:40.094 EAL: Detected lcore 27 as core 27 on socket 0 00:06:40.094 EAL: Detected lcore 28 as core 28 on socket 0 00:06:40.094 EAL: Detected lcore 29 as core 29 on socket 0 00:06:40.094 EAL: Detected lcore 30 as core 30 on socket 0 00:06:40.094 EAL: Detected lcore 31 as core 31 on socket 0 00:06:40.094 EAL: Detected lcore 32 as core 32 on socket 0 00:06:40.094 EAL: Detected lcore 33 as core 33 on socket 0 00:06:40.094 EAL: Detected lcore 34 as core 34 on socket 0 00:06:40.094 EAL: Detected lcore 35 as core 35 on socket 0 00:06:40.094 EAL: Detected lcore 36 as core 0 on socket 1 00:06:40.094 EAL: Detected lcore 37 as core 1 on socket 1 00:06:40.094 EAL: Detected lcore 38 as core 2 on socket 1 00:06:40.094 EAL: Detected lcore 39 as core 3 on socket 1 00:06:40.094 EAL: Detected lcore 40 as core 4 on socket 1 00:06:40.094 EAL: Detected lcore 41 as core 5 on socket 1 00:06:40.094 EAL: Detected lcore 42 as core 6 on socket 1 00:06:40.094 EAL: Detected lcore 43 as core 7 on socket 1 00:06:40.094 EAL: Detected lcore 44 as core 8 on socket 1 00:06:40.094 EAL: Detected lcore 45 as core 9 on socket 1 00:06:40.094 EAL: Detected lcore 46 as core 10 on socket 1 00:06:40.094 EAL: Detected lcore 47 as core 11 on socket 1 00:06:40.094 EAL: Detected lcore 48 as core 12 on socket 1 00:06:40.094 EAL: Detected lcore 49 as core 13 on socket 1 00:06:40.094 EAL: Detected lcore 50 as core 14 on socket 1 00:06:40.094 EAL: Detected lcore 51 as core 15 on socket 1 00:06:40.094 EAL: Detected lcore 52 as core 16 on socket 1 00:06:40.094 EAL: Detected lcore 53 as core 17 on socket 1 
00:06:40.094 EAL: Detected lcore 54 as core 18 on socket 1 00:06:40.094 EAL: Detected lcore 55 as core 19 on socket 1 00:06:40.094 EAL: Detected lcore 56 as core 20 on socket 1 00:06:40.094 EAL: Detected lcore 57 as core 21 on socket 1 00:06:40.094 EAL: Detected lcore 58 as core 22 on socket 1 00:06:40.094 EAL: Detected lcore 59 as core 23 on socket 1 00:06:40.094 EAL: Detected lcore 60 as core 24 on socket 1 00:06:40.094 EAL: Detected lcore 61 as core 25 on socket 1 00:06:40.094 EAL: Detected lcore 62 as core 26 on socket 1 00:06:40.094 EAL: Detected lcore 63 as core 27 on socket 1 00:06:40.094 EAL: Detected lcore 64 as core 28 on socket 1 00:06:40.094 EAL: Detected lcore 65 as core 29 on socket 1 00:06:40.094 EAL: Detected lcore 66 as core 30 on socket 1 00:06:40.094 EAL: Detected lcore 67 as core 31 on socket 1 00:06:40.094 EAL: Detected lcore 68 as core 32 on socket 1 00:06:40.094 EAL: Detected lcore 69 as core 33 on socket 1 00:06:40.094 EAL: Detected lcore 70 as core 34 on socket 1 00:06:40.094 EAL: Detected lcore 71 as core 35 on socket 1 00:06:40.094 EAL: Detected lcore 72 as core 0 on socket 0 00:06:40.094 EAL: Detected lcore 73 as core 1 on socket 0 00:06:40.094 EAL: Detected lcore 74 as core 2 on socket 0 00:06:40.094 EAL: Detected lcore 75 as core 3 on socket 0 00:06:40.094 EAL: Detected lcore 76 as core 4 on socket 0 00:06:40.094 EAL: Detected lcore 77 as core 5 on socket 0 00:06:40.094 EAL: Detected lcore 78 as core 6 on socket 0 00:06:40.094 EAL: Detected lcore 79 as core 7 on socket 0 00:06:40.094 EAL: Detected lcore 80 as core 8 on socket 0 00:06:40.094 EAL: Detected lcore 81 as core 9 on socket 0 00:06:40.094 EAL: Detected lcore 82 as core 10 on socket 0 00:06:40.094 EAL: Detected lcore 83 as core 11 on socket 0 00:06:40.094 EAL: Detected lcore 84 as core 12 on socket 0 00:06:40.094 EAL: Detected lcore 85 as core 13 on socket 0 00:06:40.094 EAL: Detected lcore 86 as core 14 on socket 0 00:06:40.094 EAL: Detected lcore 87 as core 15 on socket 0 00:06:40.094 EAL: Detected lcore 88 as core 16 on socket 0 00:06:40.094 EAL: Detected lcore 89 as core 17 on socket 0 00:06:40.094 EAL: Detected lcore 90 as core 18 on socket 0 00:06:40.094 EAL: Detected lcore 91 as core 19 on socket 0 00:06:40.094 EAL: Detected lcore 92 as core 20 on socket 0 00:06:40.094 EAL: Detected lcore 93 as core 21 on socket 0 00:06:40.094 EAL: Detected lcore 94 as core 22 on socket 0 00:06:40.094 EAL: Detected lcore 95 as core 23 on socket 0 00:06:40.094 EAL: Detected lcore 96 as core 24 on socket 0 00:06:40.094 EAL: Detected lcore 97 as core 25 on socket 0 00:06:40.094 EAL: Detected lcore 98 as core 26 on socket 0 00:06:40.094 EAL: Detected lcore 99 as core 27 on socket 0 00:06:40.094 EAL: Detected lcore 100 as core 28 on socket 0 00:06:40.094 EAL: Detected lcore 101 as core 29 on socket 0 00:06:40.094 EAL: Detected lcore 102 as core 30 on socket 0 00:06:40.094 EAL: Detected lcore 103 as core 31 on socket 0 00:06:40.094 EAL: Detected lcore 104 as core 32 on socket 0 00:06:40.094 EAL: Detected lcore 105 as core 33 on socket 0 00:06:40.094 EAL: Detected lcore 106 as core 34 on socket 0 00:06:40.094 EAL: Detected lcore 107 as core 35 on socket 0 00:06:40.094 EAL: Detected lcore 108 as core 0 on socket 1 00:06:40.094 EAL: Detected lcore 109 as core 1 on socket 1 00:06:40.094 EAL: Detected lcore 110 as core 2 on socket 1 00:06:40.094 EAL: Detected lcore 111 as core 3 on socket 1 00:06:40.094 EAL: Detected lcore 112 as core 4 on socket 1 00:06:40.094 EAL: Detected lcore 113 as core 5 on socket 1 00:06:40.094 
EAL: Detected lcore 114 as core 6 on socket 1 00:06:40.094 EAL: Detected lcore 115 as core 7 on socket 1 00:06:40.094 EAL: Detected lcore 116 as core 8 on socket 1 00:06:40.094 EAL: Detected lcore 117 as core 9 on socket 1 00:06:40.094 EAL: Detected lcore 118 as core 10 on socket 1 00:06:40.094 EAL: Detected lcore 119 as core 11 on socket 1 00:06:40.094 EAL: Detected lcore 120 as core 12 on socket 1 00:06:40.094 EAL: Detected lcore 121 as core 13 on socket 1 00:06:40.095 EAL: Detected lcore 122 as core 14 on socket 1 00:06:40.095 EAL: Detected lcore 123 as core 15 on socket 1 00:06:40.095 EAL: Detected lcore 124 as core 16 on socket 1 00:06:40.095 EAL: Detected lcore 125 as core 17 on socket 1 00:06:40.095 EAL: Detected lcore 126 as core 18 on socket 1 00:06:40.095 EAL: Detected lcore 127 as core 19 on socket 1 00:06:40.095 EAL: Skipped lcore 128 as core 20 on socket 1 00:06:40.095 EAL: Skipped lcore 129 as core 21 on socket 1 00:06:40.095 EAL: Skipped lcore 130 as core 22 on socket 1 00:06:40.095 EAL: Skipped lcore 131 as core 23 on socket 1 00:06:40.095 EAL: Skipped lcore 132 as core 24 on socket 1 00:06:40.095 EAL: Skipped lcore 133 as core 25 on socket 1 00:06:40.095 EAL: Skipped lcore 134 as core 26 on socket 1 00:06:40.095 EAL: Skipped lcore 135 as core 27 on socket 1 00:06:40.095 EAL: Skipped lcore 136 as core 28 on socket 1 00:06:40.095 EAL: Skipped lcore 137 as core 29 on socket 1 00:06:40.095 EAL: Skipped lcore 138 as core 30 on socket 1 00:06:40.095 EAL: Skipped lcore 139 as core 31 on socket 1 00:06:40.095 EAL: Skipped lcore 140 as core 32 on socket 1 00:06:40.095 EAL: Skipped lcore 141 as core 33 on socket 1 00:06:40.095 EAL: Skipped lcore 142 as core 34 on socket 1 00:06:40.095 EAL: Skipped lcore 143 as core 35 on socket 1 00:06:40.095 EAL: Maximum logical cores by configuration: 128 00:06:40.095 EAL: Detected CPU lcores: 128 00:06:40.095 EAL: Detected NUMA nodes: 2 00:06:40.095 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:40.095 EAL: Detected shared linkage of DPDK 00:06:40.095 EAL: No shared files mode enabled, IPC will be disabled 00:06:40.095 EAL: Bus pci wants IOVA as 'DC' 00:06:40.095 EAL: Buses did not request a specific IOVA mode. 00:06:40.095 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:40.095 EAL: Selected IOVA mode 'VA' 00:06:40.095 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.095 EAL: Probing VFIO support... 00:06:40.095 EAL: IOMMU type 1 (Type 1) is supported 00:06:40.095 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:40.095 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:40.095 EAL: VFIO support initialized 00:06:40.095 EAL: Ask a virtual area of 0x2e000 bytes 00:06:40.095 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:40.095 EAL: Setting up physically contiguous memory... 
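Before the memseg lists below are reserved, EAL has already committed to IOVA-as-VA, which only holds if the IOMMU is enabled and vfio is available, and the 2 MB pools reserved earlier back the allocations. Those preconditions can be spot-checked from userspace with standard sysfs/procfs paths (nothing here is SPDK-specific):

  # an active IOMMU exposes at least one group here; an empty directory forces IOVA=PA
  ls /sys/kernel/iommu_groups/
  # per-NUMA-node 2 MB hugepage pools that the memseg lists draw from
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
  # vfio modules must be present for the 'IOMMU type 1 (Type 1)' probe to succeed
  lsmod | grep '^vfio'

The 'No free 2048 kB hugepages reported on node 1' notice that follows is consistent with the hugepages table earlier in this run, where node1's 2048kB pool is 0 / 0.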
00:06:40.095 EAL: Setting maximum number of open files to 524288 00:06:40.095 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:40.095 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:40.095 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:40.095 EAL: Ask a virtual area of 0x61000 bytes 00:06:40.095 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:40.095 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:40.095 EAL: Ask a virtual area of 0x400000000 bytes 00:06:40.095 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:40.095 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:40.095 EAL: Ask a virtual area of 0x61000 bytes 00:06:40.095 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:40.095 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:40.095 EAL: Ask a virtual area of 0x400000000 bytes 00:06:40.095 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:40.095 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:40.095 EAL: Ask a virtual area of 0x61000 bytes 00:06:40.095 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:40.095 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:40.095 EAL: Ask a virtual area of 0x400000000 bytes 00:06:40.095 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:40.095 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:40.095 EAL: Ask a virtual area of 0x61000 bytes 00:06:40.095 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:40.095 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:40.095 EAL: Ask a virtual area of 0x400000000 bytes 00:06:40.095 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:40.095 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:40.095 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:40.095 EAL: Ask a virtual area of 0x61000 bytes 00:06:40.095 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:40.095 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:40.095 EAL: Ask a virtual area of 0x400000000 bytes 00:06:40.095 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:40.095 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:40.095 EAL: Ask a virtual area of 0x61000 bytes 00:06:40.095 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:40.095 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:40.095 EAL: Ask a virtual area of 0x400000000 bytes 00:06:40.095 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:40.095 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:40.095 EAL: Ask a virtual area of 0x61000 bytes 00:06:40.095 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:40.095 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:40.095 EAL: Ask a virtual area of 0x400000000 bytes 00:06:40.095 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:40.095 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:40.095 EAL: Ask a virtual area of 0x61000 bytes 00:06:40.095 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:40.095 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:40.095 EAL: Ask a virtual area of 0x400000000 bytes 00:06:40.095 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:06:40.095 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:40.095 EAL: Hugepages will be freed exactly as allocated. 00:06:40.095 EAL: No shared files mode enabled, IPC is disabled 00:06:40.095 EAL: No shared files mode enabled, IPC is disabled 00:06:40.095 EAL: TSC frequency is ~2400000 KHz 00:06:40.095 EAL: Main lcore 0 is ready (tid=7f07d4cd4a00;cpuset=[0]) 00:06:40.095 EAL: Trying to obtain current memory policy. 00:06:40.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:40.095 EAL: Restoring previous memory policy: 0 00:06:40.095 EAL: request: mp_malloc_sync 00:06:40.095 EAL: No shared files mode enabled, IPC is disabled 00:06:40.095 EAL: Heap on socket 0 was expanded by 2MB 00:06:40.095 EAL: No shared files mode enabled, IPC is disabled 00:06:40.095 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:40.095 EAL: Mem event callback 'spdk:(nil)' registered 00:06:40.095 00:06:40.095 00:06:40.095 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.095 http://cunit.sourceforge.net/ 00:06:40.095 00:06:40.095 00:06:40.095 Suite: components_suite 00:06:40.095 Test: vtophys_malloc_test ...passed 00:06:40.095 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:40.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:40.095 EAL: Restoring previous memory policy: 4 00:06:40.095 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.095 EAL: request: mp_malloc_sync 00:06:40.095 EAL: No shared files mode enabled, IPC is disabled 00:06:40.095 EAL: Heap on socket 0 was expanded by 4MB 00:06:40.095 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.095 EAL: request: mp_malloc_sync 00:06:40.095 EAL: No shared files mode enabled, IPC is disabled 00:06:40.095 EAL: Heap on socket 0 was shrunk by 4MB 00:06:40.095 EAL: Trying to obtain current memory policy. 00:06:40.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:40.095 EAL: Restoring previous memory policy: 4 00:06:40.095 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.095 EAL: request: mp_malloc_sync 00:06:40.095 EAL: No shared files mode enabled, IPC is disabled 00:06:40.095 EAL: Heap on socket 0 was expanded by 6MB 00:06:40.095 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.095 EAL: request: mp_malloc_sync 00:06:40.095 EAL: No shared files mode enabled, IPC is disabled 00:06:40.095 EAL: Heap on socket 0 was shrunk by 6MB 00:06:40.095 EAL: Trying to obtain current memory policy. 00:06:40.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:40.095 EAL: Restoring previous memory policy: 4 00:06:40.095 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.095 EAL: request: mp_malloc_sync 00:06:40.095 EAL: No shared files mode enabled, IPC is disabled 00:06:40.095 EAL: Heap on socket 0 was expanded by 10MB 00:06:40.095 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.095 EAL: request: mp_malloc_sync 00:06:40.095 EAL: No shared files mode enabled, IPC is disabled 00:06:40.095 EAL: Heap on socket 0 was shrunk by 10MB 00:06:40.095 EAL: Trying to obtain current memory policy. 
00:06:40.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:40.095 EAL: Restoring previous memory policy: 4 00:06:40.095 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.095 EAL: request: mp_malloc_sync 00:06:40.095 EAL: No shared files mode enabled, IPC is disabled 00:06:40.095 EAL: Heap on socket 0 was expanded by 18MB 00:06:40.095 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.095 EAL: request: mp_malloc_sync 00:06:40.095 EAL: No shared files mode enabled, IPC is disabled 00:06:40.095 EAL: Heap on socket 0 was shrunk by 18MB 00:06:40.095 EAL: Trying to obtain current memory policy. 00:06:40.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:40.095 EAL: Restoring previous memory policy: 4 00:06:40.095 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.095 EAL: request: mp_malloc_sync 00:06:40.095 EAL: No shared files mode enabled, IPC is disabled 00:06:40.095 EAL: Heap on socket 0 was expanded by 34MB 00:06:40.095 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.095 EAL: request: mp_malloc_sync 00:06:40.095 EAL: No shared files mode enabled, IPC is disabled 00:06:40.095 EAL: Heap on socket 0 was shrunk by 34MB 00:06:40.095 EAL: Trying to obtain current memory policy. 00:06:40.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:40.095 EAL: Restoring previous memory policy: 4 00:06:40.095 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.095 EAL: request: mp_malloc_sync 00:06:40.095 EAL: No shared files mode enabled, IPC is disabled 00:06:40.095 EAL: Heap on socket 0 was expanded by 66MB 00:06:40.095 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.095 EAL: request: mp_malloc_sync 00:06:40.095 EAL: No shared files mode enabled, IPC is disabled 00:06:40.095 EAL: Heap on socket 0 was shrunk by 66MB 00:06:40.095 EAL: Trying to obtain current memory policy. 00:06:40.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:40.095 EAL: Restoring previous memory policy: 4 00:06:40.095 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.096 EAL: request: mp_malloc_sync 00:06:40.096 EAL: No shared files mode enabled, IPC is disabled 00:06:40.096 EAL: Heap on socket 0 was expanded by 130MB 00:06:40.356 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.356 EAL: request: mp_malloc_sync 00:06:40.356 EAL: No shared files mode enabled, IPC is disabled 00:06:40.356 EAL: Heap on socket 0 was shrunk by 130MB 00:06:40.356 EAL: Trying to obtain current memory policy. 00:06:40.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:40.356 EAL: Restoring previous memory policy: 4 00:06:40.356 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.356 EAL: request: mp_malloc_sync 00:06:40.356 EAL: No shared files mode enabled, IPC is disabled 00:06:40.356 EAL: Heap on socket 0 was expanded by 258MB 00:06:40.356 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.356 EAL: request: mp_malloc_sync 00:06:40.356 EAL: No shared files mode enabled, IPC is disabled 00:06:40.356 EAL: Heap on socket 0 was shrunk by 258MB 00:06:40.356 EAL: Trying to obtain current memory policy. 
00:06:40.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:40.356 EAL: Restoring previous memory policy: 4 00:06:40.356 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.356 EAL: request: mp_malloc_sync 00:06:40.356 EAL: No shared files mode enabled, IPC is disabled 00:06:40.356 EAL: Heap on socket 0 was expanded by 514MB 00:06:40.356 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.617 EAL: request: mp_malloc_sync 00:06:40.617 EAL: No shared files mode enabled, IPC is disabled 00:06:40.617 EAL: Heap on socket 0 was shrunk by 514MB 00:06:40.617 EAL: Trying to obtain current memory policy. 00:06:40.617 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:40.617 EAL: Restoring previous memory policy: 4 00:06:40.617 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.617 EAL: request: mp_malloc_sync 00:06:40.617 EAL: No shared files mode enabled, IPC is disabled 00:06:40.617 EAL: Heap on socket 0 was expanded by 1026MB 00:06:40.617 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.877 EAL: request: mp_malloc_sync 00:06:40.877 EAL: No shared files mode enabled, IPC is disabled 00:06:40.877 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:40.877 passed 00:06:40.877 00:06:40.877 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.877 suites 1 1 n/a 0 0 00:06:40.877 tests 2 2 2 0 0 00:06:40.877 asserts 497 497 497 0 n/a 00:06:40.877 00:06:40.877 Elapsed time = 0.664 seconds 00:06:40.877 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.877 EAL: request: mp_malloc_sync 00:06:40.877 EAL: No shared files mode enabled, IPC is disabled 00:06:40.877 EAL: Heap on socket 0 was shrunk by 2MB 00:06:40.877 EAL: No shared files mode enabled, IPC is disabled 00:06:40.877 EAL: No shared files mode enabled, IPC is disabled 00:06:40.877 EAL: No shared files mode enabled, IPC is disabled 00:06:40.877 00:06:40.877 real 0m0.788s 00:06:40.877 user 0m0.406s 00:06:40.877 sys 0m0.354s 00:06:40.877 11:15:09 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:40.877 11:15:09 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:40.877 ************************************ 00:06:40.877 END TEST env_vtophys 00:06:40.877 ************************************ 00:06:40.877 11:15:09 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:06:40.877 11:15:09 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:40.877 11:15:09 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:40.877 11:15:09 env -- common/autotest_common.sh@10 -- # set +x 00:06:40.877 ************************************ 00:06:40.877 START TEST env_pci 00:06:40.877 ************************************ 00:06:40.877 11:15:09 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:06:40.877 00:06:40.877 00:06:40.877 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.877 http://cunit.sourceforge.net/ 00:06:40.877 00:06:40.877 00:06:40.877 Suite: pci 00:06:40.877 Test: pci_hook ...[2024-06-10 11:15:09.781954] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3402176 has claimed it 00:06:40.877 EAL: Cannot find device (10000:00:01.0) 00:06:40.877 EAL: Failed to attach device on primary process 00:06:40.877 passed 00:06:40.877 00:06:40.877 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.877 suites 1 
1 n/a 0 0 00:06:40.877 tests 1 1 1 0 0 00:06:40.877 asserts 25 25 25 0 n/a 00:06:40.877 00:06:40.877 Elapsed time = 0.035 seconds 00:06:40.877 00:06:40.877 real 0m0.056s 00:06:40.877 user 0m0.019s 00:06:40.877 sys 0m0.036s 00:06:40.877 11:15:09 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:40.877 11:15:09 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:40.877 ************************************ 00:06:40.877 END TEST env_pci 00:06:40.877 ************************************ 00:06:41.138 11:15:09 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:41.138 11:15:09 env -- env/env.sh@15 -- # uname 00:06:41.138 11:15:09 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:41.138 11:15:09 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:41.138 11:15:09 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:41.138 11:15:09 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:06:41.138 11:15:09 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:41.138 11:15:09 env -- common/autotest_common.sh@10 -- # set +x 00:06:41.138 ************************************ 00:06:41.138 START TEST env_dpdk_post_init 00:06:41.138 ************************************ 00:06:41.138 11:15:09 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:41.138 EAL: Detected CPU lcores: 128 00:06:41.138 EAL: Detected NUMA nodes: 2 00:06:41.138 EAL: Detected shared linkage of DPDK 00:06:41.138 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:41.138 EAL: Selected IOVA mode 'VA' 00:06:41.138 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.138 EAL: VFIO support initialized 00:06:41.138 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:41.138 EAL: Using IOMMU type 1 (Type 1) 00:06:41.399 EAL: Ignore mapping IO port bar(1) 00:06:41.399 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:06:41.399 EAL: Ignore mapping IO port bar(1) 00:06:41.659 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:06:41.659 EAL: Ignore mapping IO port bar(1) 00:06:41.919 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:06:41.919 EAL: Ignore mapping IO port bar(1) 00:06:42.179 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:06:42.179 EAL: Ignore mapping IO port bar(1) 00:06:42.179 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:06:42.439 EAL: Ignore mapping IO port bar(1) 00:06:42.439 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:06:42.700 EAL: Ignore mapping IO port bar(1) 00:06:42.700 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:06:42.960 EAL: Ignore mapping IO port bar(1) 00:06:42.960 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:06:43.220 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:06:43.220 EAL: Ignore mapping IO port bar(1) 00:06:43.481 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:06:43.481 EAL: Ignore mapping IO port bar(1) 00:06:43.742 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:06:43.742 EAL: Ignore 
mapping IO port bar(1) 00:06:43.742 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:06:44.003 EAL: Ignore mapping IO port bar(1) 00:06:44.003 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:06:44.264 EAL: Ignore mapping IO port bar(1) 00:06:44.264 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:06:44.524 EAL: Ignore mapping IO port bar(1) 00:06:44.524 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:06:44.785 EAL: Ignore mapping IO port bar(1) 00:06:44.785 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:06:44.785 EAL: Ignore mapping IO port bar(1) 00:06:45.045 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:06:45.045 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:06:45.045 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:06:45.045 Starting DPDK initialization... 00:06:45.045 Starting SPDK post initialization... 00:06:45.045 SPDK NVMe probe 00:06:45.045 Attaching to 0000:65:00.0 00:06:45.045 Attached to 0000:65:00.0 00:06:45.045 Cleaning up... 00:06:46.959 00:06:46.959 real 0m5.716s 00:06:46.959 user 0m0.191s 00:06:46.959 sys 0m0.066s 00:06:46.959 11:15:15 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:46.959 11:15:15 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:46.959 ************************************ 00:06:46.959 END TEST env_dpdk_post_init 00:06:46.959 ************************************ 00:06:46.959 11:15:15 env -- env/env.sh@26 -- # uname 00:06:46.959 11:15:15 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:46.959 11:15:15 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:46.959 11:15:15 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:46.959 11:15:15 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:46.959 11:15:15 env -- common/autotest_common.sh@10 -- # set +x 00:06:46.959 ************************************ 00:06:46.959 START TEST env_mem_callbacks 00:06:46.959 ************************************ 00:06:46.960 11:15:15 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:46.960 EAL: Detected CPU lcores: 128 00:06:46.960 EAL: Detected NUMA nodes: 2 00:06:46.960 EAL: Detected shared linkage of DPDK 00:06:46.960 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:46.960 EAL: Selected IOVA mode 'VA' 00:06:46.960 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.960 EAL: VFIO support initialized 00:06:46.960 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:46.960 00:06:46.960 00:06:46.960 CUnit - A unit testing framework for C - Version 2.1-3 00:06:46.960 http://cunit.sourceforge.net/ 00:06:46.960 00:06:46.960 00:06:46.960 Suite: memory 00:06:46.960 Test: test ... 
00:06:46.960 register 0x200000200000 2097152 00:06:46.960 malloc 3145728 00:06:46.960 register 0x200000400000 4194304 00:06:46.960 buf 0x200000500000 len 3145728 PASSED 00:06:46.960 malloc 64 00:06:46.960 buf 0x2000004fff40 len 64 PASSED 00:06:46.960 malloc 4194304 00:06:46.960 register 0x200000800000 6291456 00:06:46.960 buf 0x200000a00000 len 4194304 PASSED 00:06:46.960 free 0x200000500000 3145728 00:06:46.960 free 0x2000004fff40 64 00:06:46.960 unregister 0x200000400000 4194304 PASSED 00:06:46.960 free 0x200000a00000 4194304 00:06:46.960 unregister 0x200000800000 6291456 PASSED 00:06:46.960 malloc 8388608 00:06:46.960 register 0x200000400000 10485760 00:06:46.960 buf 0x200000600000 len 8388608 PASSED 00:06:46.960 free 0x200000600000 8388608 00:06:46.960 unregister 0x200000400000 10485760 PASSED 00:06:46.960 passed 00:06:46.960 00:06:46.960 Run Summary: Type Total Ran Passed Failed Inactive 00:06:46.960 suites 1 1 n/a 0 0 00:06:46.960 tests 1 1 1 0 0 00:06:46.960 asserts 15 15 15 0 n/a 00:06:46.960 00:06:46.960 Elapsed time = 0.008 seconds 00:06:46.960 00:06:46.960 real 0m0.064s 00:06:46.960 user 0m0.019s 00:06:46.960 sys 0m0.044s 00:06:46.960 11:15:15 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:46.960 11:15:15 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:46.960 ************************************ 00:06:46.960 END TEST env_mem_callbacks 00:06:46.960 ************************************ 00:06:46.960 00:06:46.960 real 0m7.340s 00:06:46.960 user 0m1.038s 00:06:46.960 sys 0m0.843s 00:06:46.960 11:15:15 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:46.960 11:15:15 env -- common/autotest_common.sh@10 -- # set +x 00:06:46.960 ************************************ 00:06:46.960 END TEST env 00:06:46.960 ************************************ 00:06:46.960 11:15:15 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:06:46.960 11:15:15 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:46.960 11:15:15 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:46.960 11:15:15 -- common/autotest_common.sh@10 -- # set +x 00:06:46.960 ************************************ 00:06:46.960 START TEST rpc 00:06:46.960 ************************************ 00:06:46.960 11:15:15 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:06:47.221 * Looking for test storage... 00:06:47.221 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:47.221 11:15:15 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3403790 00:06:47.221 11:15:15 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:47.221 11:15:15 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3403790 00:06:47.221 11:15:15 rpc -- common/autotest_common.sh@830 -- # '[' -z 3403790 ']' 00:06:47.221 11:15:15 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.221 11:15:15 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:47.221 11:15:15 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
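The rpc suite that starts here launches spdk_tgt with the bdev tracepoint group enabled and blocks in waitforlisten until /var/tmp/spdk.sock answers. The same flow can be driven by hand with scripts/rpc.py against a running target; a sketch using the default socket (the size and block arguments mirror the 16384-block, 512-byte Malloc0 created below):

  # confirm the target is up on the default UNIX socket (10 s timeout)
  ./scripts/rpc.py -t 10 spdk_get_version
  # 8 MB malloc disk, 512-byte blocks -> the Malloc0 seen in bdev_get_bdevs below
  ./scripts/rpc.py bdev_malloc_create 8 512 -b Malloc0
  # layer the passthru vbdev on top, as rpc_integrity does at rpc.sh@19
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  # the JSON arrays dumped in the trace come from this call
  ./scripts/rpc.py bdev_get_bdevs | jq length

rpc_integrity's pass/fail logic below is just jq length on that JSON: 0 bdevs before create, 1 after the malloc, 2 once the passthru is stacked, and back to 0 after both deletes.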
00:06:47.221 11:15:15 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:47.221 11:15:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.221 11:15:15 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:47.221 [2024-06-10 11:15:16.021751] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:06:47.221 [2024-06-10 11:15:16.021830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3403790 ] 00:06:47.221 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.221 [2024-06-10 11:15:16.082995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.221 [2024-06-10 11:15:16.148658] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:47.221 [2024-06-10 11:15:16.148698] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3403790' to capture a snapshot of events at runtime. 00:06:47.221 [2024-06-10 11:15:16.148705] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:47.221 [2024-06-10 11:15:16.148712] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:47.221 [2024-06-10 11:15:16.148717] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3403790 for offline analysis/debug. 00:06:47.221 [2024-06-10 11:15:16.148738] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.793 11:15:16 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:47.793 11:15:16 rpc -- common/autotest_common.sh@863 -- # return 0 00:06:47.793 11:15:16 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:47.793 11:15:16 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:47.793 11:15:16 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:47.793 11:15:16 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:47.793 11:15:16 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:47.793 11:15:16 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:47.793 11:15:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.054 ************************************ 00:06:48.054 START TEST rpc_integrity 00:06:48.054 ************************************ 00:06:48.054 11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:06:48.054 11:15:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:48.054 11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.054 11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.054 11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.054 11:15:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:48.054 11:15:16 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:48.054 11:15:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:48.054 11:15:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:48.054 11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.054 11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.054 11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.054 11:15:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:48.054 11:15:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:48.054 11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.054 11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.054 11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.054 11:15:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:48.054 { 00:06:48.054 "name": "Malloc0", 00:06:48.054 "aliases": [ 00:06:48.054 "a61b8702-4dac-4fd4-b1a3-6b348b169c29" 00:06:48.054 ], 00:06:48.055 "product_name": "Malloc disk", 00:06:48.055 "block_size": 512, 00:06:48.055 "num_blocks": 16384, 00:06:48.055 "uuid": "a61b8702-4dac-4fd4-b1a3-6b348b169c29", 00:06:48.055 "assigned_rate_limits": { 00:06:48.055 "rw_ios_per_sec": 0, 00:06:48.055 "rw_mbytes_per_sec": 0, 00:06:48.055 "r_mbytes_per_sec": 0, 00:06:48.055 "w_mbytes_per_sec": 0 00:06:48.055 }, 00:06:48.055 "claimed": false, 00:06:48.055 "zoned": false, 00:06:48.055 "supported_io_types": { 00:06:48.055 "read": true, 00:06:48.055 "write": true, 00:06:48.055 "unmap": true, 00:06:48.055 "write_zeroes": true, 00:06:48.055 "flush": true, 00:06:48.055 "reset": true, 00:06:48.055 "compare": false, 00:06:48.055 "compare_and_write": false, 00:06:48.055 "abort": true, 00:06:48.055 "nvme_admin": false, 00:06:48.055 "nvme_io": false 00:06:48.055 }, 00:06:48.055 "memory_domains": [ 00:06:48.055 { 00:06:48.055 "dma_device_id": "system", 00:06:48.055 "dma_device_type": 1 00:06:48.055 }, 00:06:48.055 { 00:06:48.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.055 "dma_device_type": 2 00:06:48.055 } 00:06:48.055 ], 00:06:48.055 "driver_specific": {} 00:06:48.055 } 00:06:48.055 ]' 00:06:48.055 11:15:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:48.055 11:15:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:48.055 11:15:16 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:48.055 11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.055 11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.055 [2024-06-10 11:15:16.912822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:48.055 [2024-06-10 11:15:16.912855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:48.055 [2024-06-10 11:15:16.912867] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22e62d0 00:06:48.055 [2024-06-10 11:15:16.912874] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:48.055 [2024-06-10 11:15:16.914217] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:48.055 [2024-06-10 11:15:16.914237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:48.055 Passthru0 00:06:48.055 11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@588 -- 
# [[ 0 == 0 ]] 00:06:48.055 11:15:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:48.055 11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.055 11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.055 11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.055 11:15:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:48.055 { 00:06:48.055 "name": "Malloc0", 00:06:48.055 "aliases": [ 00:06:48.055 "a61b8702-4dac-4fd4-b1a3-6b348b169c29" 00:06:48.055 ], 00:06:48.055 "product_name": "Malloc disk", 00:06:48.055 "block_size": 512, 00:06:48.055 "num_blocks": 16384, 00:06:48.055 "uuid": "a61b8702-4dac-4fd4-b1a3-6b348b169c29", 00:06:48.055 "assigned_rate_limits": { 00:06:48.055 "rw_ios_per_sec": 0, 00:06:48.055 "rw_mbytes_per_sec": 0, 00:06:48.055 "r_mbytes_per_sec": 0, 00:06:48.055 "w_mbytes_per_sec": 0 00:06:48.055 }, 00:06:48.055 "claimed": true, 00:06:48.055 "claim_type": "exclusive_write", 00:06:48.055 "zoned": false, 00:06:48.055 "supported_io_types": { 00:06:48.055 "read": true, 00:06:48.055 "write": true, 00:06:48.055 "unmap": true, 00:06:48.055 "write_zeroes": true, 00:06:48.055 "flush": true, 00:06:48.055 "reset": true, 00:06:48.055 "compare": false, 00:06:48.055 "compare_and_write": false, 00:06:48.055 "abort": true, 00:06:48.055 "nvme_admin": false, 00:06:48.055 "nvme_io": false 00:06:48.055 }, 00:06:48.055 "memory_domains": [ 00:06:48.055 { 00:06:48.055 "dma_device_id": "system", 00:06:48.055 "dma_device_type": 1 00:06:48.055 }, 00:06:48.055 { 00:06:48.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.055 "dma_device_type": 2 00:06:48.055 } 00:06:48.055 ], 00:06:48.055 "driver_specific": {} 00:06:48.055 }, 00:06:48.055 { 00:06:48.055 "name": "Passthru0", 00:06:48.055 "aliases": [ 00:06:48.055 "25355003-cc62-5259-8570-ae391b0864bb" 00:06:48.055 ], 00:06:48.055 "product_name": "passthru", 00:06:48.055 "block_size": 512, 00:06:48.055 "num_blocks": 16384, 00:06:48.055 "uuid": "25355003-cc62-5259-8570-ae391b0864bb", 00:06:48.055 "assigned_rate_limits": { 00:06:48.055 "rw_ios_per_sec": 0, 00:06:48.055 "rw_mbytes_per_sec": 0, 00:06:48.055 "r_mbytes_per_sec": 0, 00:06:48.055 "w_mbytes_per_sec": 0 00:06:48.055 }, 00:06:48.055 "claimed": false, 00:06:48.055 "zoned": false, 00:06:48.055 "supported_io_types": { 00:06:48.055 "read": true, 00:06:48.055 "write": true, 00:06:48.055 "unmap": true, 00:06:48.055 "write_zeroes": true, 00:06:48.055 "flush": true, 00:06:48.055 "reset": true, 00:06:48.055 "compare": false, 00:06:48.055 "compare_and_write": false, 00:06:48.055 "abort": true, 00:06:48.055 "nvme_admin": false, 00:06:48.055 "nvme_io": false 00:06:48.055 }, 00:06:48.055 "memory_domains": [ 00:06:48.055 { 00:06:48.055 "dma_device_id": "system", 00:06:48.055 "dma_device_type": 1 00:06:48.055 }, 00:06:48.055 { 00:06:48.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.055 "dma_device_type": 2 00:06:48.055 } 00:06:48.055 ], 00:06:48.055 "driver_specific": { 00:06:48.055 "passthru": { 00:06:48.055 "name": "Passthru0", 00:06:48.055 "base_bdev_name": "Malloc0" 00:06:48.055 } 00:06:48.055 } 00:06:48.055 } 00:06:48.055 ]' 00:06:48.055 11:15:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:48.055 11:15:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:48.055 11:15:16 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:48.055 11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.055 
11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.055 11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.055 11:15:16 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:48.055 11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.055 11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.055 11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.055 11:15:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:48.055 11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.055 11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.055 11:15:16 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.055 11:15:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:48.055 11:15:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:48.316 11:15:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:48.316 00:06:48.316 real 0m0.255s 00:06:48.316 user 0m0.159s 00:06:48.316 sys 0m0.030s 00:06:48.316 11:15:17 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:48.316 11:15:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.316 ************************************ 00:06:48.316 END TEST rpc_integrity 00:06:48.316 ************************************ 00:06:48.316 11:15:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:48.316 11:15:17 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:48.316 11:15:17 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:48.316 11:15:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.316 ************************************ 00:06:48.316 START TEST rpc_plugins 00:06:48.316 ************************************ 00:06:48.316 11:15:17 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:06:48.316 11:15:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:48.316 11:15:17 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.316 11:15:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:48.316 11:15:17 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.316 11:15:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:48.316 11:15:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:48.316 11:15:17 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.316 11:15:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:48.316 11:15:17 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.316 11:15:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:48.316 { 00:06:48.316 "name": "Malloc1", 00:06:48.316 "aliases": [ 00:06:48.316 "9a44f269-bb37-4134-b669-a4a2ccb1b677" 00:06:48.316 ], 00:06:48.316 "product_name": "Malloc disk", 00:06:48.316 "block_size": 4096, 00:06:48.316 "num_blocks": 256, 00:06:48.316 "uuid": "9a44f269-bb37-4134-b669-a4a2ccb1b677", 00:06:48.316 "assigned_rate_limits": { 00:06:48.316 "rw_ios_per_sec": 0, 00:06:48.316 "rw_mbytes_per_sec": 0, 00:06:48.316 "r_mbytes_per_sec": 0, 00:06:48.316 "w_mbytes_per_sec": 0 00:06:48.316 }, 00:06:48.316 "claimed": false, 00:06:48.316 "zoned": false, 00:06:48.316 "supported_io_types": { 00:06:48.316 "read": true, 00:06:48.316 "write": 
true, 00:06:48.316 "unmap": true, 00:06:48.316 "write_zeroes": true, 00:06:48.316 "flush": true, 00:06:48.316 "reset": true, 00:06:48.317 "compare": false, 00:06:48.317 "compare_and_write": false, 00:06:48.317 "abort": true, 00:06:48.317 "nvme_admin": false, 00:06:48.317 "nvme_io": false 00:06:48.317 }, 00:06:48.317 "memory_domains": [ 00:06:48.317 { 00:06:48.317 "dma_device_id": "system", 00:06:48.317 "dma_device_type": 1 00:06:48.317 }, 00:06:48.317 { 00:06:48.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.317 "dma_device_type": 2 00:06:48.317 } 00:06:48.317 ], 00:06:48.317 "driver_specific": {} 00:06:48.317 } 00:06:48.317 ]' 00:06:48.317 11:15:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:48.317 11:15:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:48.317 11:15:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:48.317 11:15:17 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.317 11:15:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:48.317 11:15:17 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.317 11:15:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:48.317 11:15:17 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.317 11:15:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:48.317 11:15:17 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.317 11:15:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:48.317 11:15:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:48.317 11:15:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:48.317 00:06:48.317 real 0m0.128s 00:06:48.317 user 0m0.080s 00:06:48.317 sys 0m0.012s 00:06:48.317 11:15:17 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:48.317 11:15:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:48.317 ************************************ 00:06:48.317 END TEST rpc_plugins 00:06:48.317 ************************************ 00:06:48.317 11:15:17 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:48.317 11:15:17 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:48.317 11:15:17 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:48.317 11:15:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.577 ************************************ 00:06:48.577 START TEST rpc_trace_cmd_test 00:06:48.577 ************************************ 00:06:48.577 11:15:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:06:48.577 11:15:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:48.577 11:15:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:48.577 11:15:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.577 11:15:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.577 11:15:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.577 11:15:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:48.577 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3403790", 00:06:48.577 "tpoint_group_mask": "0x8", 00:06:48.577 "iscsi_conn": { 00:06:48.577 "mask": "0x2", 00:06:48.577 "tpoint_mask": "0x0" 00:06:48.577 }, 00:06:48.577 "scsi": { 00:06:48.577 "mask": "0x4", 00:06:48.577 "tpoint_mask": "0x0" 00:06:48.577 }, 00:06:48.577 
"bdev": { 00:06:48.577 "mask": "0x8", 00:06:48.577 "tpoint_mask": "0xffffffffffffffff" 00:06:48.577 }, 00:06:48.577 "nvmf_rdma": { 00:06:48.577 "mask": "0x10", 00:06:48.577 "tpoint_mask": "0x0" 00:06:48.578 }, 00:06:48.578 "nvmf_tcp": { 00:06:48.578 "mask": "0x20", 00:06:48.578 "tpoint_mask": "0x0" 00:06:48.578 }, 00:06:48.578 "ftl": { 00:06:48.578 "mask": "0x40", 00:06:48.578 "tpoint_mask": "0x0" 00:06:48.578 }, 00:06:48.578 "blobfs": { 00:06:48.578 "mask": "0x80", 00:06:48.578 "tpoint_mask": "0x0" 00:06:48.578 }, 00:06:48.578 "dsa": { 00:06:48.578 "mask": "0x200", 00:06:48.578 "tpoint_mask": "0x0" 00:06:48.578 }, 00:06:48.578 "thread": { 00:06:48.578 "mask": "0x400", 00:06:48.578 "tpoint_mask": "0x0" 00:06:48.578 }, 00:06:48.578 "nvme_pcie": { 00:06:48.578 "mask": "0x800", 00:06:48.578 "tpoint_mask": "0x0" 00:06:48.578 }, 00:06:48.578 "iaa": { 00:06:48.578 "mask": "0x1000", 00:06:48.578 "tpoint_mask": "0x0" 00:06:48.578 }, 00:06:48.578 "nvme_tcp": { 00:06:48.578 "mask": "0x2000", 00:06:48.578 "tpoint_mask": "0x0" 00:06:48.578 }, 00:06:48.578 "bdev_nvme": { 00:06:48.578 "mask": "0x4000", 00:06:48.578 "tpoint_mask": "0x0" 00:06:48.578 }, 00:06:48.578 "sock": { 00:06:48.578 "mask": "0x8000", 00:06:48.578 "tpoint_mask": "0x0" 00:06:48.578 } 00:06:48.578 }' 00:06:48.578 11:15:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:48.578 11:15:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:48.578 11:15:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:48.578 11:15:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:48.578 11:15:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:48.578 11:15:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:48.578 11:15:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:48.578 11:15:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:48.578 11:15:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:48.578 11:15:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:48.578 00:06:48.578 real 0m0.240s 00:06:48.578 user 0m0.203s 00:06:48.578 sys 0m0.030s 00:06:48.578 11:15:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:48.578 11:15:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.578 ************************************ 00:06:48.578 END TEST rpc_trace_cmd_test 00:06:48.578 ************************************ 00:06:48.839 11:15:17 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:48.839 11:15:17 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:48.839 11:15:17 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:48.839 11:15:17 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:48.839 11:15:17 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:48.839 11:15:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.839 ************************************ 00:06:48.839 START TEST rpc_daemon_integrity 00:06:48.839 ************************************ 00:06:48.839 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:06:48.839 11:15:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:48.839 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.839 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:06:48.839 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.839 11:15:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:48.839 11:15:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:48.839 11:15:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:48.839 11:15:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:48.839 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.839 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.839 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.839 11:15:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:48.839 11:15:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:48.839 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.839 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.839 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.839 11:15:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:48.839 { 00:06:48.839 "name": "Malloc2", 00:06:48.839 "aliases": [ 00:06:48.839 "281cff62-c3aa-48a8-96ff-bcff70313504" 00:06:48.839 ], 00:06:48.839 "product_name": "Malloc disk", 00:06:48.839 "block_size": 512, 00:06:48.839 "num_blocks": 16384, 00:06:48.839 "uuid": "281cff62-c3aa-48a8-96ff-bcff70313504", 00:06:48.839 "assigned_rate_limits": { 00:06:48.839 "rw_ios_per_sec": 0, 00:06:48.839 "rw_mbytes_per_sec": 0, 00:06:48.839 "r_mbytes_per_sec": 0, 00:06:48.839 "w_mbytes_per_sec": 0 00:06:48.839 }, 00:06:48.839 "claimed": false, 00:06:48.839 "zoned": false, 00:06:48.839 "supported_io_types": { 00:06:48.839 "read": true, 00:06:48.839 "write": true, 00:06:48.839 "unmap": true, 00:06:48.839 "write_zeroes": true, 00:06:48.839 "flush": true, 00:06:48.840 "reset": true, 00:06:48.840 "compare": false, 00:06:48.840 "compare_and_write": false, 00:06:48.840 "abort": true, 00:06:48.840 "nvme_admin": false, 00:06:48.840 "nvme_io": false 00:06:48.840 }, 00:06:48.840 "memory_domains": [ 00:06:48.840 { 00:06:48.840 "dma_device_id": "system", 00:06:48.840 "dma_device_type": 1 00:06:48.840 }, 00:06:48.840 { 00:06:48.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.840 "dma_device_type": 2 00:06:48.840 } 00:06:48.840 ], 00:06:48.840 "driver_specific": {} 00:06:48.840 } 00:06:48.840 ]' 00:06:48.840 11:15:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:48.840 11:15:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:48.840 11:15:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:48.840 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.840 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.840 [2024-06-10 11:15:17.751097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:48.840 [2024-06-10 11:15:17.751124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:48.840 [2024-06-10 11:15:17.751140] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22e7930 00:06:48.840 [2024-06-10 11:15:17.751147] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:48.840 
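Note: the rpc_integrity pass above and the rpc_daemon_integrity pass in progress here exercise the same bdev flow: create a malloc bdev, wrap it in a passthru bdev, confirm both appear in bdev_get_bdevs, then tear everything down and confirm the list is empty again. A minimal standalone sketch of that flow, assuming a running spdk_tgt and SPDK's scripts/rpc.py on PATH talking to the default /var/tmp/spdk.sock (the suite itself goes through its rpc_cmd wrapper, and the rpc_plugins pass above drives the same create/delete through a loadable plugin via --plugin rpc_plugin):

  MALLOC=$(rpc.py bdev_malloc_create 8 512)               # 8 MB backing bdev, 512-byte blocks
  rpc.py bdev_passthru_create -b "$MALLOC" -p Passthru0   # passthru vbdev claims the malloc bdev
  rpc.py bdev_get_bdevs | jq length                       # expect 2: the malloc bdev plus Passthru0
  rpc.py bdev_passthru_delete Passthru0
  rpc.py bdev_malloc_delete "$MALLOC"
  rpc.py bdev_get_bdevs | jq length                       # expect 0 once both are gone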
[2024-06-10 11:15:17.752350] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:48.840 [2024-06-10 11:15:17.752369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:48.840 Passthru0 00:06:48.840 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.840 11:15:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:48.840 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.840 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.840 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.840 11:15:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:48.840 { 00:06:48.840 "name": "Malloc2", 00:06:48.840 "aliases": [ 00:06:48.840 "281cff62-c3aa-48a8-96ff-bcff70313504" 00:06:48.840 ], 00:06:48.840 "product_name": "Malloc disk", 00:06:48.840 "block_size": 512, 00:06:48.840 "num_blocks": 16384, 00:06:48.840 "uuid": "281cff62-c3aa-48a8-96ff-bcff70313504", 00:06:48.840 "assigned_rate_limits": { 00:06:48.840 "rw_ios_per_sec": 0, 00:06:48.840 "rw_mbytes_per_sec": 0, 00:06:48.840 "r_mbytes_per_sec": 0, 00:06:48.840 "w_mbytes_per_sec": 0 00:06:48.840 }, 00:06:48.840 "claimed": true, 00:06:48.840 "claim_type": "exclusive_write", 00:06:48.840 "zoned": false, 00:06:48.840 "supported_io_types": { 00:06:48.840 "read": true, 00:06:48.840 "write": true, 00:06:48.840 "unmap": true, 00:06:48.840 "write_zeroes": true, 00:06:48.840 "flush": true, 00:06:48.840 "reset": true, 00:06:48.840 "compare": false, 00:06:48.840 "compare_and_write": false, 00:06:48.840 "abort": true, 00:06:48.840 "nvme_admin": false, 00:06:48.840 "nvme_io": false 00:06:48.840 }, 00:06:48.840 "memory_domains": [ 00:06:48.840 { 00:06:48.840 "dma_device_id": "system", 00:06:48.840 "dma_device_type": 1 00:06:48.840 }, 00:06:48.840 { 00:06:48.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.840 "dma_device_type": 2 00:06:48.840 } 00:06:48.840 ], 00:06:48.840 "driver_specific": {} 00:06:48.840 }, 00:06:48.840 { 00:06:48.840 "name": "Passthru0", 00:06:48.840 "aliases": [ 00:06:48.840 "1181588c-1bb8-5eb8-aa20-fde992ec6347" 00:06:48.840 ], 00:06:48.840 "product_name": "passthru", 00:06:48.840 "block_size": 512, 00:06:48.840 "num_blocks": 16384, 00:06:48.840 "uuid": "1181588c-1bb8-5eb8-aa20-fde992ec6347", 00:06:48.840 "assigned_rate_limits": { 00:06:48.840 "rw_ios_per_sec": 0, 00:06:48.840 "rw_mbytes_per_sec": 0, 00:06:48.840 "r_mbytes_per_sec": 0, 00:06:48.840 "w_mbytes_per_sec": 0 00:06:48.840 }, 00:06:48.840 "claimed": false, 00:06:48.840 "zoned": false, 00:06:48.840 "supported_io_types": { 00:06:48.840 "read": true, 00:06:48.840 "write": true, 00:06:48.840 "unmap": true, 00:06:48.840 "write_zeroes": true, 00:06:48.840 "flush": true, 00:06:48.840 "reset": true, 00:06:48.840 "compare": false, 00:06:48.840 "compare_and_write": false, 00:06:48.840 "abort": true, 00:06:48.840 "nvme_admin": false, 00:06:48.840 "nvme_io": false 00:06:48.840 }, 00:06:48.840 "memory_domains": [ 00:06:48.840 { 00:06:48.840 "dma_device_id": "system", 00:06:48.840 "dma_device_type": 1 00:06:48.840 }, 00:06:48.840 { 00:06:48.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.840 "dma_device_type": 2 00:06:48.840 } 00:06:48.840 ], 00:06:48.840 "driver_specific": { 00:06:48.840 "passthru": { 00:06:48.840 "name": "Passthru0", 00:06:48.840 "base_bdev_name": "Malloc2" 00:06:48.840 } 00:06:48.840 } 00:06:48.840 
} 00:06:48.840 ]' 00:06:48.840 11:15:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:49.102 11:15:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:49.102 11:15:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:49.102 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:49.102 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.102 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:49.102 11:15:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:49.102 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:49.102 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.102 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:49.102 11:15:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:49.102 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:49.102 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.102 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:49.102 11:15:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:49.102 11:15:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:49.102 11:15:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:49.102 00:06:49.102 real 0m0.276s 00:06:49.102 user 0m0.180s 00:06:49.102 sys 0m0.031s 00:06:49.102 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:49.102 11:15:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.102 ************************************ 00:06:49.102 END TEST rpc_daemon_integrity 00:06:49.102 ************************************ 00:06:49.102 11:15:17 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:49.102 11:15:17 rpc -- rpc/rpc.sh@84 -- # killprocess 3403790 00:06:49.102 11:15:17 rpc -- common/autotest_common.sh@949 -- # '[' -z 3403790 ']' 00:06:49.102 11:15:17 rpc -- common/autotest_common.sh@953 -- # kill -0 3403790 00:06:49.102 11:15:17 rpc -- common/autotest_common.sh@954 -- # uname 00:06:49.102 11:15:17 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:49.102 11:15:17 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3403790 00:06:49.102 11:15:17 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:49.102 11:15:17 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:49.102 11:15:17 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3403790' 00:06:49.102 killing process with pid 3403790 00:06:49.102 11:15:17 rpc -- common/autotest_common.sh@968 -- # kill 3403790 00:06:49.102 11:15:17 rpc -- common/autotest_common.sh@973 -- # wait 3403790 00:06:49.362 00:06:49.362 real 0m2.298s 00:06:49.362 user 0m3.018s 00:06:49.362 sys 0m0.606s 00:06:49.362 11:15:18 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:49.362 11:15:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.362 ************************************ 00:06:49.362 END TEST rpc 00:06:49.362 ************************************ 00:06:49.362 11:15:18 -- spdk/autotest.sh@170 -- # run_test skip_rpc 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:49.362 11:15:18 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:49.362 11:15:18 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:49.362 11:15:18 -- common/autotest_common.sh@10 -- # set +x 00:06:49.362 ************************************ 00:06:49.362 START TEST skip_rpc 00:06:49.362 ************************************ 00:06:49.362 11:15:18 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:49.624 * Looking for test storage... 00:06:49.624 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:49.624 11:15:18 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:06:49.624 11:15:18 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:06:49.624 11:15:18 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:49.624 11:15:18 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:49.624 11:15:18 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:49.624 11:15:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.624 ************************************ 00:06:49.624 START TEST skip_rpc 00:06:49.624 ************************************ 00:06:49.624 11:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:06:49.624 11:15:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3404364 00:06:49.624 11:15:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:49.624 11:15:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:49.624 11:15:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:49.624 [2024-06-10 11:15:18.445045] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
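Note: the skip_rpc pass starting here launches the target with --no-rpc-server, so no /var/tmp/spdk.sock is ever created; the only assertion, made below through the suite's NOT helper, is that an RPC call then fails. A condensed sketch of the same check with plain shell in place of the helpers:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &    # target runs without an RPC listener
  TGT_PID=$!
  sleep 5                                        # the script sleeps rather than waiting for a socket that will never appear
  if rpc.py spdk_get_version; then               # must fail: nothing is listening
      echo "RPC unexpectedly succeeded"; exit 1
  fi
  kill "$TGT_PID"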
00:06:49.624 [2024-06-10 11:15:18.445090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3404364 ] 00:06:49.624 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.624 [2024-06-10 11:15:18.506163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.624 [2024-06-10 11:15:18.570585] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3404364 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 3404364 ']' 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 3404364 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3404364 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3404364' 00:06:54.935 killing process with pid 3404364 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 3404364 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 3404364 00:06:54.935 00:06:54.935 real 0m5.277s 00:06:54.935 user 0m5.093s 00:06:54.935 sys 0m0.221s 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:54.935 11:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.935 ************************************ 00:06:54.935 END TEST skip_rpc 
00:06:54.935 ************************************ 00:06:54.935 11:15:23 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:54.935 11:15:23 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:54.935 11:15:23 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:54.935 11:15:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.935 ************************************ 00:06:54.935 START TEST skip_rpc_with_json 00:06:54.935 ************************************ 00:06:54.935 11:15:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:06:54.935 11:15:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:54.935 11:15:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3405413 00:06:54.935 11:15:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:54.935 11:15:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3405413 00:06:54.935 11:15:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:54.935 11:15:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 3405413 ']' 00:06:54.935 11:15:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.935 11:15:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:54.935 11:15:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.935 11:15:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:54.935 11:15:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:54.935 [2024-06-10 11:15:23.797139] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
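Note: the skip_rpc_with_json pass starting here builds a config by driving the live target over RPC (nvmf_get_transports first fails with "No such device", then nvmf_create_transport -t tcp and save_config), dumps the resulting config.json shown below, restarts the target from that file, and greps the new target's log for the TCP transport banner to prove the config was replayed. Roughly, with illustrative paths:

  rpc.py nvmf_create_transport -t tcp            # give the config something observable to restore
  rpc.py save_config > config.json               # snapshot the running target's configuration
  # restart from the generated file and verify the transport came back
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
  sleep 5
  grep -q 'TCP Transport Init' log.txt           # present only if the nvmf subsystem was restored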
00:06:54.935 [2024-06-10 11:15:23.797187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3405413 ] 00:06:54.935 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.935 [2024-06-10 11:15:23.858125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.218 [2024-06-10 11:15:23.923609] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.218 11:15:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:55.218 11:15:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:06:55.218 11:15:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:55.218 11:15:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:55.218 11:15:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:55.218 [2024-06-10 11:15:24.107147] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:55.218 request: 00:06:55.218 { 00:06:55.218 "trtype": "tcp", 00:06:55.218 "method": "nvmf_get_transports", 00:06:55.218 "req_id": 1 00:06:55.218 } 00:06:55.218 Got JSON-RPC error response 00:06:55.218 response: 00:06:55.218 { 00:06:55.218 "code": -19, 00:06:55.218 "message": "No such device" 00:06:55.218 } 00:06:55.218 11:15:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:06:55.218 11:15:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:55.218 11:15:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:55.218 11:15:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:55.218 [2024-06-10 11:15:24.115259] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.218 11:15:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:55.218 11:15:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:55.218 11:15:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:55.218 11:15:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:55.479 11:15:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:55.479 11:15:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:06:55.479 { 00:06:55.479 "subsystems": [ 00:06:55.479 { 00:06:55.479 "subsystem": "keyring", 00:06:55.479 "config": [] 00:06:55.479 }, 00:06:55.479 { 00:06:55.479 "subsystem": "iobuf", 00:06:55.479 "config": [ 00:06:55.479 { 00:06:55.479 "method": "iobuf_set_options", 00:06:55.479 "params": { 00:06:55.479 "small_pool_count": 8192, 00:06:55.479 "large_pool_count": 1024, 00:06:55.479 "small_bufsize": 8192, 00:06:55.479 "large_bufsize": 135168 00:06:55.479 } 00:06:55.479 } 00:06:55.479 ] 00:06:55.479 }, 00:06:55.479 { 00:06:55.479 "subsystem": "sock", 00:06:55.479 "config": [ 00:06:55.479 { 00:06:55.479 "method": "sock_set_default_impl", 00:06:55.479 "params": { 00:06:55.479 "impl_name": "posix" 00:06:55.479 } 00:06:55.479 }, 00:06:55.479 { 00:06:55.479 "method": "sock_impl_set_options", 00:06:55.479 "params": { 00:06:55.479 "impl_name": "ssl", 00:06:55.479 "recv_buf_size": 4096, 
00:06:55.479 "send_buf_size": 4096, 00:06:55.479 "enable_recv_pipe": true, 00:06:55.479 "enable_quickack": false, 00:06:55.479 "enable_placement_id": 0, 00:06:55.479 "enable_zerocopy_send_server": true, 00:06:55.479 "enable_zerocopy_send_client": false, 00:06:55.479 "zerocopy_threshold": 0, 00:06:55.479 "tls_version": 0, 00:06:55.479 "enable_ktls": false 00:06:55.479 } 00:06:55.479 }, 00:06:55.479 { 00:06:55.479 "method": "sock_impl_set_options", 00:06:55.479 "params": { 00:06:55.479 "impl_name": "posix", 00:06:55.479 "recv_buf_size": 2097152, 00:06:55.479 "send_buf_size": 2097152, 00:06:55.479 "enable_recv_pipe": true, 00:06:55.479 "enable_quickack": false, 00:06:55.479 "enable_placement_id": 0, 00:06:55.479 "enable_zerocopy_send_server": true, 00:06:55.479 "enable_zerocopy_send_client": false, 00:06:55.479 "zerocopy_threshold": 0, 00:06:55.479 "tls_version": 0, 00:06:55.479 "enable_ktls": false 00:06:55.479 } 00:06:55.479 } 00:06:55.479 ] 00:06:55.479 }, 00:06:55.479 { 00:06:55.479 "subsystem": "vmd", 00:06:55.479 "config": [] 00:06:55.479 }, 00:06:55.479 { 00:06:55.479 "subsystem": "accel", 00:06:55.479 "config": [ 00:06:55.479 { 00:06:55.479 "method": "accel_set_options", 00:06:55.479 "params": { 00:06:55.479 "small_cache_size": 128, 00:06:55.479 "large_cache_size": 16, 00:06:55.479 "task_count": 2048, 00:06:55.479 "sequence_count": 2048, 00:06:55.479 "buf_count": 2048 00:06:55.479 } 00:06:55.479 } 00:06:55.479 ] 00:06:55.479 }, 00:06:55.479 { 00:06:55.479 "subsystem": "bdev", 00:06:55.479 "config": [ 00:06:55.479 { 00:06:55.479 "method": "bdev_set_options", 00:06:55.479 "params": { 00:06:55.479 "bdev_io_pool_size": 65535, 00:06:55.479 "bdev_io_cache_size": 256, 00:06:55.479 "bdev_auto_examine": true, 00:06:55.479 "iobuf_small_cache_size": 128, 00:06:55.479 "iobuf_large_cache_size": 16 00:06:55.479 } 00:06:55.479 }, 00:06:55.479 { 00:06:55.479 "method": "bdev_raid_set_options", 00:06:55.479 "params": { 00:06:55.479 "process_window_size_kb": 1024 00:06:55.479 } 00:06:55.479 }, 00:06:55.479 { 00:06:55.479 "method": "bdev_iscsi_set_options", 00:06:55.479 "params": { 00:06:55.479 "timeout_sec": 30 00:06:55.479 } 00:06:55.479 }, 00:06:55.479 { 00:06:55.479 "method": "bdev_nvme_set_options", 00:06:55.479 "params": { 00:06:55.479 "action_on_timeout": "none", 00:06:55.479 "timeout_us": 0, 00:06:55.479 "timeout_admin_us": 0, 00:06:55.479 "keep_alive_timeout_ms": 10000, 00:06:55.479 "arbitration_burst": 0, 00:06:55.479 "low_priority_weight": 0, 00:06:55.479 "medium_priority_weight": 0, 00:06:55.479 "high_priority_weight": 0, 00:06:55.479 "nvme_adminq_poll_period_us": 10000, 00:06:55.479 "nvme_ioq_poll_period_us": 0, 00:06:55.479 "io_queue_requests": 0, 00:06:55.479 "delay_cmd_submit": true, 00:06:55.479 "transport_retry_count": 4, 00:06:55.479 "bdev_retry_count": 3, 00:06:55.479 "transport_ack_timeout": 0, 00:06:55.479 "ctrlr_loss_timeout_sec": 0, 00:06:55.479 "reconnect_delay_sec": 0, 00:06:55.479 "fast_io_fail_timeout_sec": 0, 00:06:55.479 "disable_auto_failback": false, 00:06:55.479 "generate_uuids": false, 00:06:55.479 "transport_tos": 0, 00:06:55.479 "nvme_error_stat": false, 00:06:55.479 "rdma_srq_size": 0, 00:06:55.479 "io_path_stat": false, 00:06:55.479 "allow_accel_sequence": false, 00:06:55.479 "rdma_max_cq_size": 0, 00:06:55.479 "rdma_cm_event_timeout_ms": 0, 00:06:55.479 "dhchap_digests": [ 00:06:55.479 "sha256", 00:06:55.479 "sha384", 00:06:55.479 "sha512" 00:06:55.479 ], 00:06:55.479 "dhchap_dhgroups": [ 00:06:55.479 "null", 00:06:55.479 "ffdhe2048", 00:06:55.479 "ffdhe3072", 
00:06:55.479 "ffdhe4096", 00:06:55.479 "ffdhe6144", 00:06:55.479 "ffdhe8192" 00:06:55.479 ] 00:06:55.479 } 00:06:55.479 }, 00:06:55.479 { 00:06:55.479 "method": "bdev_nvme_set_hotplug", 00:06:55.479 "params": { 00:06:55.479 "period_us": 100000, 00:06:55.479 "enable": false 00:06:55.479 } 00:06:55.479 }, 00:06:55.479 { 00:06:55.479 "method": "bdev_wait_for_examine" 00:06:55.479 } 00:06:55.479 ] 00:06:55.479 }, 00:06:55.479 { 00:06:55.479 "subsystem": "scsi", 00:06:55.479 "config": null 00:06:55.479 }, 00:06:55.479 { 00:06:55.479 "subsystem": "scheduler", 00:06:55.479 "config": [ 00:06:55.479 { 00:06:55.479 "method": "framework_set_scheduler", 00:06:55.479 "params": { 00:06:55.479 "name": "static" 00:06:55.479 } 00:06:55.479 } 00:06:55.479 ] 00:06:55.479 }, 00:06:55.479 { 00:06:55.479 "subsystem": "vhost_scsi", 00:06:55.479 "config": [] 00:06:55.479 }, 00:06:55.479 { 00:06:55.479 "subsystem": "vhost_blk", 00:06:55.479 "config": [] 00:06:55.479 }, 00:06:55.479 { 00:06:55.479 "subsystem": "ublk", 00:06:55.479 "config": [] 00:06:55.479 }, 00:06:55.479 { 00:06:55.479 "subsystem": "nbd", 00:06:55.479 "config": [] 00:06:55.479 }, 00:06:55.479 { 00:06:55.479 "subsystem": "nvmf", 00:06:55.479 "config": [ 00:06:55.479 { 00:06:55.479 "method": "nvmf_set_config", 00:06:55.479 "params": { 00:06:55.479 "discovery_filter": "match_any", 00:06:55.479 "admin_cmd_passthru": { 00:06:55.479 "identify_ctrlr": false 00:06:55.479 } 00:06:55.479 } 00:06:55.479 }, 00:06:55.479 { 00:06:55.479 "method": "nvmf_set_max_subsystems", 00:06:55.479 "params": { 00:06:55.479 "max_subsystems": 1024 00:06:55.479 } 00:06:55.479 }, 00:06:55.479 { 00:06:55.479 "method": "nvmf_set_crdt", 00:06:55.479 "params": { 00:06:55.479 "crdt1": 0, 00:06:55.479 "crdt2": 0, 00:06:55.479 "crdt3": 0 00:06:55.479 } 00:06:55.479 }, 00:06:55.479 { 00:06:55.479 "method": "nvmf_create_transport", 00:06:55.479 "params": { 00:06:55.479 "trtype": "TCP", 00:06:55.479 "max_queue_depth": 128, 00:06:55.479 "max_io_qpairs_per_ctrlr": 127, 00:06:55.479 "in_capsule_data_size": 4096, 00:06:55.479 "max_io_size": 131072, 00:06:55.479 "io_unit_size": 131072, 00:06:55.479 "max_aq_depth": 128, 00:06:55.479 "num_shared_buffers": 511, 00:06:55.479 "buf_cache_size": 4294967295, 00:06:55.479 "dif_insert_or_strip": false, 00:06:55.479 "zcopy": false, 00:06:55.479 "c2h_success": true, 00:06:55.479 "sock_priority": 0, 00:06:55.479 "abort_timeout_sec": 1, 00:06:55.479 "ack_timeout": 0, 00:06:55.479 "data_wr_pool_size": 0 00:06:55.479 } 00:06:55.479 } 00:06:55.479 ] 00:06:55.479 }, 00:06:55.479 { 00:06:55.479 "subsystem": "iscsi", 00:06:55.479 "config": [ 00:06:55.479 { 00:06:55.479 "method": "iscsi_set_options", 00:06:55.479 "params": { 00:06:55.480 "node_base": "iqn.2016-06.io.spdk", 00:06:55.480 "max_sessions": 128, 00:06:55.480 "max_connections_per_session": 2, 00:06:55.480 "max_queue_depth": 64, 00:06:55.480 "default_time2wait": 2, 00:06:55.480 "default_time2retain": 20, 00:06:55.480 "first_burst_length": 8192, 00:06:55.480 "immediate_data": true, 00:06:55.480 "allow_duplicated_isid": false, 00:06:55.480 "error_recovery_level": 0, 00:06:55.480 "nop_timeout": 60, 00:06:55.480 "nop_in_interval": 30, 00:06:55.480 "disable_chap": false, 00:06:55.480 "require_chap": false, 00:06:55.480 "mutual_chap": false, 00:06:55.480 "chap_group": 0, 00:06:55.480 "max_large_datain_per_connection": 64, 00:06:55.480 "max_r2t_per_connection": 4, 00:06:55.480 "pdu_pool_size": 36864, 00:06:55.480 "immediate_data_pool_size": 16384, 00:06:55.480 "data_out_pool_size": 2048 00:06:55.480 } 
00:06:55.480 } 00:06:55.480 ] 00:06:55.480 } 00:06:55.480 ] 00:06:55.480 } 00:06:55.480 11:15:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:55.480 11:15:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3405413 00:06:55.480 11:15:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 3405413 ']' 00:06:55.480 11:15:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 3405413 00:06:55.480 11:15:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:06:55.480 11:15:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:55.480 11:15:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3405413 00:06:55.480 11:15:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:55.480 11:15:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:55.480 11:15:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3405413' 00:06:55.480 killing process with pid 3405413 00:06:55.480 11:15:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 3405413 00:06:55.480 11:15:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 3405413 00:06:55.740 11:15:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3405735 00:06:55.740 11:15:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:55.740 11:15:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3405735 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 3405735 ']' 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 3405735 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3405735 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3405735' 00:07:01.028 killing process with pid 3405735 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 3405735 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 3405735 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:07:01.028 00:07:01.028 real 0m6.039s 00:07:01.028 user 0m5.849s 00:07:01.028 sys 0m0.472s 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:01.028 ************************************ 00:07:01.028 END TEST skip_rpc_with_json 00:07:01.028 ************************************ 00:07:01.028 11:15:29 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:01.028 11:15:29 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:01.028 11:15:29 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:01.028 11:15:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.028 ************************************ 00:07:01.028 START TEST skip_rpc_with_delay 00:07:01.028 ************************************ 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:01.028 [2024-06-10 11:15:29.914849] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:07:01.028 [2024-06-10 11:15:29.914926] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:01.028 00:07:01.028 real 0m0.076s 00:07:01.028 user 0m0.050s 00:07:01.028 sys 0m0.025s 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:01.028 11:15:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:01.028 ************************************ 00:07:01.028 END TEST skip_rpc_with_delay 00:07:01.028 ************************************ 00:07:01.029 11:15:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:01.029 11:15:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:01.029 11:15:29 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:01.029 11:15:29 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:01.029 11:15:29 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:01.029 11:15:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.029 ************************************ 00:07:01.029 START TEST exit_on_failed_rpc_init 00:07:01.029 ************************************ 00:07:01.029 11:15:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:07:01.029 11:15:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3406802 00:07:01.290 11:15:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3406802 00:07:01.290 11:15:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 3406802 ']' 00:07:01.290 11:15:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.290 11:15:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:01.290 11:15:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.290 11:15:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:01.290 11:15:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:01.290 11:15:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:01.290 [2024-06-10 11:15:30.052417] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:07:01.290 [2024-06-10 11:15:30.052465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3406802 ] 00:07:01.290 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.290 [2024-06-10 11:15:30.112145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.290 [2024-06-10 11:15:30.177977] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.860 11:15:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:01.860 11:15:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:07:01.860 11:15:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:01.860 11:15:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:01.860 11:15:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:07:01.860 11:15:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:01.860 11:15:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:01.860 11:15:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:01.860 11:15:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:01.860 11:15:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:01.860 11:15:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:01.860 11:15:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:01.860 11:15:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:01.860 11:15:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:01.860 11:15:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:02.120 [2024-06-10 11:15:30.866636] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:07:02.120 [2024-06-10 11:15:30.866705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3406990 ] 00:07:02.121 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.121 [2024-06-10 11:15:30.942931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.121 [2024-06-10 11:15:31.006966] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.121 [2024-06-10 11:15:31.007026] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:07:02.121 [2024-06-10 11:15:31.007035] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:02.121 [2024-06-10 11:15:31.007042] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.121 11:15:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:07:02.121 11:15:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:02.121 11:15:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:07:02.121 11:15:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:07:02.121 11:15:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:07:02.121 11:15:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:02.121 11:15:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:02.121 11:15:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3406802 00:07:02.121 11:15:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 3406802 ']' 00:07:02.121 11:15:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 3406802 00:07:02.121 11:15:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:07:02.121 11:15:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:02.121 11:15:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3406802 00:07:02.381 11:15:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:02.381 11:15:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:02.381 11:15:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3406802' 00:07:02.381 killing process with pid 3406802 00:07:02.381 11:15:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 3406802 00:07:02.381 11:15:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 3406802 00:07:02.381 00:07:02.381 real 0m1.325s 00:07:02.381 user 0m1.559s 00:07:02.381 sys 0m0.356s 00:07:02.381 11:15:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:02.381 11:15:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:02.381 ************************************ 00:07:02.381 END TEST exit_on_failed_rpc_init 00:07:02.381 ************************************ 00:07:02.642 11:15:31 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:07:02.642 00:07:02.642 real 0m13.099s 00:07:02.642 user 0m12.683s 00:07:02.642 sys 0m1.346s 00:07:02.642 11:15:31 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:02.642 11:15:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.642 ************************************ 00:07:02.642 END TEST skip_rpc 00:07:02.642 ************************************ 00:07:02.642 11:15:31 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:02.642 11:15:31 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:02.642 11:15:31 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:02.642 11:15:31 -- 
common/autotest_common.sh@10 -- # set +x 00:07:02.642 ************************************ 00:07:02.642 START TEST rpc_client 00:07:02.642 ************************************ 00:07:02.642 11:15:31 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:02.642 * Looking for test storage... 00:07:02.642 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:07:02.642 11:15:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:02.642 OK 00:07:02.642 11:15:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:02.642 00:07:02.642 real 0m0.125s 00:07:02.642 user 0m0.064s 00:07:02.642 sys 0m0.069s 00:07:02.642 11:15:31 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:02.642 11:15:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:02.642 ************************************ 00:07:02.642 END TEST rpc_client 00:07:02.642 ************************************ 00:07:02.642 11:15:31 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:07:02.642 11:15:31 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:02.642 11:15:31 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:02.642 11:15:31 -- common/autotest_common.sh@10 -- # set +x 00:07:02.904 ************************************ 00:07:02.904 START TEST json_config 00:07:02.904 ************************************ 00:07:02.904 11:15:31 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:07:02.904 11:15:31 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:02.904 11:15:31 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.904 11:15:31 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.904 11:15:31 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.904 11:15:31 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.904 11:15:31 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.904 11:15:31 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.904 11:15:31 json_config -- paths/export.sh@5 -- # export PATH 00:07:02.904 11:15:31 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@47 -- # : 0 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:02.904 11:15:31 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:02.904 11:15:31 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:07:02.904 11:15:31 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:02.904 11:15:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:02.904 11:15:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:02.904 11:15:31 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:02.904 11:15:31 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:02.904 11:15:31 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:02.904 11:15:31 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:02.904 11:15:31 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:02.904 11:15:31 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:02.904 11:15:31 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:02.904 11:15:31 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:07:02.904 11:15:31 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:02.904 11:15:31 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:02.904 11:15:31 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:02.904 11:15:31 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:07:02.904 INFO: JSON configuration test init 00:07:02.904 11:15:31 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:07:02.904 11:15:31 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:07:02.904 11:15:31 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:02.904 11:15:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.904 11:15:31 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:07:02.904 11:15:31 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:02.905 11:15:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.905 11:15:31 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:07:02.905 11:15:31 json_config -- json_config/common.sh@9 -- # local app=target 00:07:02.905 11:15:31 json_config -- json_config/common.sh@10 -- # shift 00:07:02.905 11:15:31 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:02.905 11:15:31 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:02.905 11:15:31 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:02.905 11:15:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:02.905 11:15:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:02.905 11:15:31 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3407256 00:07:02.905 11:15:31 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:02.905 Waiting for target to run... 
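In shell terms, the launch-and-wait step announced here amounts to the sketch below. The spdk_tgt command line matches the one traced just after this point; the retry count and the use of rpc_get_methods as the liveness probe are assumptions about what waitforlisten does, not a verbatim copy of it.

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    pid=$!
    # Poll the private RPC socket until the target answers (assumed probe command).
    until $SPDK/scripts/rpc.py -t 1 -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done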
00:07:02.905 11:15:31 json_config -- json_config/common.sh@25 -- # waitforlisten 3407256 /var/tmp/spdk_tgt.sock 00:07:02.905 11:15:31 json_config -- common/autotest_common.sh@830 -- # '[' -z 3407256 ']' 00:07:02.905 11:15:31 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:02.905 11:15:31 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:02.905 11:15:31 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:02.905 11:15:31 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:02.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:02.905 11:15:31 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:02.905 11:15:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.905 [2024-06-10 11:15:31.802526] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:07:02.905 [2024-06-10 11:15:31.802596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3407256 ] 00:07:02.905 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.166 [2024-06-10 11:15:32.115741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.426 [2024-06-10 11:15:32.167483] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.687 11:15:32 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:03.687 11:15:32 json_config -- common/autotest_common.sh@863 -- # return 0 00:07:03.687 11:15:32 json_config -- json_config/common.sh@26 -- # echo '' 00:07:03.687 00:07:03.687 11:15:32 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:07:03.687 11:15:32 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:07:03.687 11:15:32 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:03.687 11:15:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.687 11:15:32 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:07:03.687 11:15:32 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:07:03.687 11:15:32 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:03.687 11:15:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.687 11:15:32 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:03.687 11:15:32 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:07:03.687 11:15:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:04.259 11:15:33 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:07:04.259 11:15:33 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:04.259 11:15:33 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:04.259 11:15:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:04.259 11:15:33 json_config -- json_config/json_config.sh@45 -- 
# local ret=0 00:07:04.259 11:15:33 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:04.259 11:15:33 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:04.259 11:15:33 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:07:04.259 11:15:33 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:07:04.259 11:15:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:04.520 11:15:33 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:04.520 11:15:33 json_config -- json_config/json_config.sh@48 -- # local get_types 00:07:04.520 11:15:33 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:07:04.520 11:15:33 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:07:04.520 11:15:33 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:04.520 11:15:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:04.520 11:15:33 json_config -- json_config/json_config.sh@55 -- # return 0 00:07:04.520 11:15:33 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:07:04.520 11:15:33 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:07:04.520 11:15:33 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:07:04.520 11:15:33 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:07:04.520 11:15:33 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:07:04.520 11:15:33 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:07:04.520 11:15:33 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:04.520 11:15:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:04.520 11:15:33 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:04.520 11:15:33 json_config -- json_config/json_config.sh@233 -- # [[ rdma == \r\d\m\a ]] 00:07:04.520 11:15:33 json_config -- json_config/json_config.sh@234 -- # TEST_TRANSPORT=rdma 00:07:04.520 11:15:33 json_config -- json_config/json_config.sh@234 -- # nvmftestinit 00:07:04.520 11:15:33 json_config -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:04.520 11:15:33 json_config -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:04.520 11:15:33 json_config -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:04.520 11:15:33 json_config -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:04.520 11:15:33 json_config -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:04.520 11:15:33 json_config -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.520 11:15:33 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:07:04.520 11:15:33 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.520 11:15:33 json_config -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:07:04.520 11:15:33 json_config -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:04.520 11:15:33 json_config -- nvmf/common.sh@285 -- # xtrace_disable 00:07:04.520 11:15:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@291 -- # pci_devs=() 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@295 -- # net_devs=() 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@296 -- # e810=() 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@296 -- # local -ga e810 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@297 -- # x722=() 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@297 -- # local -ga x722 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@298 -- # mlx=() 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@298 -- # local -ga mlx 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:11.102 11:15:39 json_config -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:11.102 11:15:40 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:11.102 11:15:40 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:07:11.102 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:07:11.102 11:15:40 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@362 -- # 
NVME_CONNECT='nvme connect -i 15' 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:07:11.103 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:07:11.103 Found net devices under 0000:98:00.0: mlx_0_0 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:07:11.103 Found net devices under 0000:98:00.1: mlx_0_1 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@414 -- # is_hw=yes 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@420 -- # rdma_device_init 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@58 -- # uname 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@67 -- # modprobe rdma_cm 
00:07:11.103 11:15:40 json_config -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:11.103 11:15:40 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@105 -- # continue 2 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@105 -- # continue 2 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:11.364 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:11.364 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:07:11.364 altname enp152s0f0np0 00:07:11.364 altname ens817f0np0 00:07:11.364 inet 192.168.100.8/24 scope global mlx_0_0 00:07:11.364 valid_lft forever preferred_lft forever 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@75 -- # [[ -z 
192.168.100.9 ]] 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:11.364 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:11.364 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:07:11.364 altname enp152s0f1np1 00:07:11.364 altname ens817f1np1 00:07:11.364 inet 192.168.100.9/24 scope global mlx_0_1 00:07:11.364 valid_lft forever preferred_lft forever 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@422 -- # return 0 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@105 -- # continue 2 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@105 -- # continue 2 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:11.364 192.168.100.9' 
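Condensed, the address discovery traced above reduces to the loop below; mlx_0_0 and mlx_0_1 are the net devices found under the two ConnectX ports, and the ip/awk/cut pipeline is copied from the get_ip_address trace.

    for ifc in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    done
    # prints 192.168.100.8 and 192.168.100.9, which become NVMF_FIRST_TARGET_IP
    # and NVMF_SECOND_TARGET_IP for the RDMA listener set up below.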
00:07:11.364 11:15:40 json_config -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:11.364 192.168.100.9' 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@457 -- # head -n 1 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:11.364 192.168.100.9' 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@458 -- # tail -n +2 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@458 -- # head -n 1 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:11.364 11:15:40 json_config -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:11.364 11:15:40 json_config -- json_config/json_config.sh@237 -- # [[ -z 192.168.100.8 ]] 00:07:11.364 11:15:40 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:11.364 11:15:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:11.625 MallocForNvmf0 00:07:11.625 11:15:40 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:11.625 11:15:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:11.625 MallocForNvmf1 00:07:11.625 11:15:40 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:07:11.625 11:15:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:07:11.886 [2024-06-10 11:15:40.691477] rdma.c:2724:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:07:11.886 [2024-06-10 11:15:40.726113] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ed2940/0x1edf0b0) succeed. 00:07:11.886 [2024-06-10 11:15:40.740368] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ed4b30/0x1f5f180) succeed. 
00:07:11.886 11:15:40 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:11.886 11:15:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:12.147 11:15:40 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:12.147 11:15:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:12.408 11:15:41 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:12.408 11:15:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:12.408 11:15:41 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:12.408 11:15:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:12.668 [2024-06-10 11:15:41.435790] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:12.669 11:15:41 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:07:12.669 11:15:41 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:12.669 11:15:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:12.669 11:15:41 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:07:12.669 11:15:41 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:12.669 11:15:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:12.669 11:15:41 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:07:12.669 11:15:41 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:12.669 11:15:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:12.929 MallocBdevForConfigChangeCheck 00:07:12.930 11:15:41 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:07:12.930 11:15:41 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:12.930 11:15:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:12.930 11:15:41 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:07:12.930 11:15:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:13.190 11:15:42 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:07:13.190 INFO: shutting down applications... 
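The target configuration saved just above was assembled with the RPC sequence below, copied from the tgt_rpc trace; the final redirect of save_config into spdk_tgt_config.json is inferred from configs_path['target'] rather than shown verbatim in the log.

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock
    $rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc -s $sock nvmf_create_transport -t rdma -u 8192 -c 0
    $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc -s $sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
    $rpc -s $sock save_config > /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json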
00:07:13.190 11:15:42 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:07:13.190 11:15:42 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:07:13.190 11:15:42 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:07:13.190 11:15:42 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:13.760 Calling clear_iscsi_subsystem 00:07:13.760 Calling clear_nvmf_subsystem 00:07:13.760 Calling clear_nbd_subsystem 00:07:13.760 Calling clear_ublk_subsystem 00:07:13.760 Calling clear_vhost_blk_subsystem 00:07:13.760 Calling clear_vhost_scsi_subsystem 00:07:13.760 Calling clear_bdev_subsystem 00:07:13.760 11:15:42 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:07:13.760 11:15:42 json_config -- json_config/json_config.sh@343 -- # count=100 00:07:13.760 11:15:42 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:07:13.760 11:15:42 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:13.760 11:15:42 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:13.760 11:15:42 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:14.023 11:15:42 json_config -- json_config/json_config.sh@345 -- # break 00:07:14.023 11:15:42 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:07:14.023 11:15:42 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:07:14.023 11:15:42 json_config -- json_config/common.sh@31 -- # local app=target 00:07:14.023 11:15:42 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:14.023 11:15:42 json_config -- json_config/common.sh@35 -- # [[ -n 3407256 ]] 00:07:14.023 11:15:42 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3407256 00:07:14.023 11:15:42 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:14.023 11:15:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:14.023 11:15:42 json_config -- json_config/common.sh@41 -- # kill -0 3407256 00:07:14.023 11:15:42 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:14.595 11:15:43 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:14.595 11:15:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:14.595 11:15:43 json_config -- json_config/common.sh@41 -- # kill -0 3407256 00:07:14.595 11:15:43 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:14.595 11:15:43 json_config -- json_config/common.sh@43 -- # break 00:07:14.595 11:15:43 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:14.595 11:15:43 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:14.595 SPDK target shutdown done 00:07:14.595 11:15:43 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:07:14.595 INFO: relaunching applications... 
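The shutdown just logged follows the common.sh pattern visible in the trace: send SIGINT, then poll the pid with kill -0 for up to 30 half-second intervals before declaring the target gone. A minimal sketch:

    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break    # process exited, stop waiting
        sleep 0.5
    done
    echo 'SPDK target shutdown done'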
00:07:14.595 11:15:43 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:14.595 11:15:43 json_config -- json_config/common.sh@9 -- # local app=target 00:07:14.595 11:15:43 json_config -- json_config/common.sh@10 -- # shift 00:07:14.595 11:15:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:14.595 11:15:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:14.595 11:15:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:14.595 11:15:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:14.595 11:15:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:14.595 11:15:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3412017 00:07:14.595 11:15:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:14.595 Waiting for target to run... 00:07:14.595 11:15:43 json_config -- json_config/common.sh@25 -- # waitforlisten 3412017 /var/tmp/spdk_tgt.sock 00:07:14.595 11:15:43 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:14.595 11:15:43 json_config -- common/autotest_common.sh@830 -- # '[' -z 3412017 ']' 00:07:14.595 11:15:43 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:14.595 11:15:43 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:14.595 11:15:43 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:14.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:14.595 11:15:43 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:14.595 11:15:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:14.595 [2024-06-10 11:15:43.382210] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:07:14.595 [2024-06-10 11:15:43.382279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412017 ] 00:07:14.595 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.856 [2024-06-10 11:15:43.767164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.856 [2024-06-10 11:15:43.818523] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.428 [2024-06-10 11:15:44.345211] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x152dc20/0x1393840) succeed. 00:07:15.428 [2024-06-10 11:15:44.359005] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x152ddf0/0x14138c0) succeed. 
00:07:15.688 [2024-06-10 11:15:44.415500] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:15.688 11:15:44 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:15.688 11:15:44 json_config -- common/autotest_common.sh@863 -- # return 0 00:07:15.688 11:15:44 json_config -- json_config/common.sh@26 -- # echo '' 00:07:15.688 00:07:15.688 11:15:44 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:07:15.688 11:15:44 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:15.688 INFO: Checking if target configuration is the same... 00:07:15.688 11:15:44 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:15.688 11:15:44 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:07:15.688 11:15:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:15.688 + '[' 2 -ne 2 ']' 00:07:15.688 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:15.688 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:07:15.688 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:15.688 +++ basename /dev/fd/62 00:07:15.688 ++ mktemp /tmp/62.XXX 00:07:15.688 + tmp_file_1=/tmp/62.ECo 00:07:15.688 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:15.688 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:15.688 + tmp_file_2=/tmp/spdk_tgt_config.json.hnA 00:07:15.688 + ret=0 00:07:15.688 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:15.951 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:15.951 + diff -u /tmp/62.ECo /tmp/spdk_tgt_config.json.hnA 00:07:15.951 + echo 'INFO: JSON config files are the same' 00:07:15.951 INFO: JSON config files are the same 00:07:15.951 + rm /tmp/62.ECo /tmp/spdk_tgt_config.json.hnA 00:07:15.951 + exit 0 00:07:15.951 11:15:44 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:07:15.951 11:15:44 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:15.951 INFO: changing configuration and checking if this can be detected... 
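The "files are the same" verdict comes from json_diff.sh, which canonicalizes both sides with the same sort filter before diffing. A minimal sketch, with the mktemp names and /dev/fd plumbing from the trace simplified into ordinary temp files:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py
    $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live_sorted.json
    $filter -method sort < /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json > /tmp/saved_sorted.json
    diff -u /tmp/live_sorted.json /tmp/saved_sorted.json && echo 'INFO: JSON config files are the same'

After MallocBdevForConfigChangeCheck is deleted below, the same comparison returns non-zero, which is what the "configuration change detected" message reports.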
00:07:15.951 11:15:44 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:15.951 11:15:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:16.253 11:15:44 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:16.253 11:15:44 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:07:16.253 11:15:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:16.253 + '[' 2 -ne 2 ']' 00:07:16.253 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:16.253 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:07:16.253 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:16.253 +++ basename /dev/fd/62 00:07:16.253 ++ mktemp /tmp/62.XXX 00:07:16.253 + tmp_file_1=/tmp/62.9J4 00:07:16.253 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:16.253 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:16.253 + tmp_file_2=/tmp/spdk_tgt_config.json.6hT 00:07:16.253 + ret=0 00:07:16.253 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:16.514 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:16.514 + diff -u /tmp/62.9J4 /tmp/spdk_tgt_config.json.6hT 00:07:16.514 + ret=1 00:07:16.514 + echo '=== Start of file: /tmp/62.9J4 ===' 00:07:16.514 + cat /tmp/62.9J4 00:07:16.514 + echo '=== End of file: /tmp/62.9J4 ===' 00:07:16.514 + echo '' 00:07:16.514 + echo '=== Start of file: /tmp/spdk_tgt_config.json.6hT ===' 00:07:16.514 + cat /tmp/spdk_tgt_config.json.6hT 00:07:16.514 + echo '=== End of file: /tmp/spdk_tgt_config.json.6hT ===' 00:07:16.514 + echo '' 00:07:16.514 + rm /tmp/62.9J4 /tmp/spdk_tgt_config.json.6hT 00:07:16.514 + exit 1 00:07:16.514 11:15:45 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:07:16.514 INFO: configuration change detected. 
00:07:16.514 11:15:45 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:07:16.514 11:15:45 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:07:16.514 11:15:45 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:16.514 11:15:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:16.514 11:15:45 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:07:16.514 11:15:45 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:07:16.514 11:15:45 json_config -- json_config/json_config.sh@317 -- # [[ -n 3412017 ]] 00:07:16.514 11:15:45 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:07:16.514 11:15:45 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:07:16.514 11:15:45 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:16.514 11:15:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:16.514 11:15:45 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:07:16.514 11:15:45 json_config -- json_config/json_config.sh@193 -- # uname -s 00:07:16.514 11:15:45 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:07:16.514 11:15:45 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:07:16.514 11:15:45 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:07:16.514 11:15:45 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:07:16.514 11:15:45 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:16.514 11:15:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:16.514 11:15:45 json_config -- json_config/json_config.sh@323 -- # killprocess 3412017 00:07:16.514 11:15:45 json_config -- common/autotest_common.sh@949 -- # '[' -z 3412017 ']' 00:07:16.514 11:15:45 json_config -- common/autotest_common.sh@953 -- # kill -0 3412017 00:07:16.514 11:15:45 json_config -- common/autotest_common.sh@954 -- # uname 00:07:16.514 11:15:45 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:16.514 11:15:45 json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3412017 00:07:16.514 11:15:45 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:16.514 11:15:45 json_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:16.514 11:15:45 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3412017' 00:07:16.514 killing process with pid 3412017 00:07:16.514 11:15:45 json_config -- common/autotest_common.sh@968 -- # kill 3412017 00:07:16.514 11:15:45 json_config -- common/autotest_common.sh@973 -- # wait 3412017 00:07:17.086 11:15:45 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:17.086 11:15:45 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:07:17.086 11:15:45 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:17.086 11:15:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:17.086 11:15:45 json_config -- json_config/json_config.sh@328 -- # return 0 00:07:17.086 11:15:45 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:07:17.086 INFO: Success 00:07:17.086 11:15:45 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:07:17.086 11:15:45 json_config -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:17.086 11:15:45 json_config -- nvmf/common.sh@117 -- # sync 00:07:17.086 11:15:45 json_config -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:07:17.086 11:15:45 json_config -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:07:17.086 11:15:45 json_config -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:17.086 11:15:45 json_config -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:17.086 11:15:45 json_config -- nvmf/common.sh@495 -- # [[ '' == \t\c\p ]] 00:07:17.086 00:07:17.086 real 0m14.179s 00:07:17.086 user 0m17.619s 00:07:17.086 sys 0m6.792s 00:07:17.086 11:15:45 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:17.086 11:15:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:17.086 ************************************ 00:07:17.086 END TEST json_config 00:07:17.086 ************************************ 00:07:17.086 11:15:45 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:17.086 11:15:45 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:17.086 11:15:45 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:17.086 11:15:45 -- common/autotest_common.sh@10 -- # set +x 00:07:17.086 ************************************ 00:07:17.086 START TEST json_config_extra_key 00:07:17.086 ************************************ 00:07:17.086 11:15:45 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:17.086 11:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:17.086 11:15:45 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.086 11:15:45 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.086 11:15:45 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.086 11:15:45 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.086 11:15:45 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.086 11:15:45 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.086 11:15:45 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:17.086 11:15:45 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:17.086 11:15:45 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:17.086 11:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:07:17.086 11:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:17.086 11:15:45 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:17.086 11:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:17.086 11:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:17.086 11:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:17.086 11:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:17.086 11:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:17.086 11:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:17.086 11:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:17.086 11:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:17.086 INFO: launching applications... 00:07:17.086 11:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:07:17.087 11:15:45 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:17.087 11:15:45 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:17.087 11:15:45 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:17.087 11:15:45 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:17.087 11:15:45 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:17.087 11:15:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:17.087 11:15:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:17.087 11:15:45 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3412724 00:07:17.087 11:15:45 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:17.087 Waiting for target to run... 00:07:17.087 11:15:45 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3412724 /var/tmp/spdk_tgt.sock 00:07:17.087 11:15:45 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 3412724 ']' 00:07:17.087 11:15:45 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:17.087 11:15:45 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:17.087 11:15:45 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:07:17.087 11:15:45 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:17.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
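This test reuses the launch pattern from json_config, but hands spdk_tgt a pre-built JSON config via --json instead of --wait-for-rpc: the target applies extra_key.json at startup, and the test then only confirms it comes up and shuts it down. The command below is the one traced just above.

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json &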
00:07:17.087 11:15:45 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:17.087 11:15:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:17.087 [2024-06-10 11:15:46.037014] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:07:17.087 [2024-06-10 11:15:46.037092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412724 ] 00:07:17.348 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.348 [2024-06-10 11:15:46.288047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.609 [2024-06-10 11:15:46.337880] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.869 11:15:46 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:17.869 11:15:46 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:07:17.869 11:15:46 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:17.869 00:07:17.869 11:15:46 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:17.869 INFO: shutting down applications... 00:07:17.869 11:15:46 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:17.869 11:15:46 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:17.869 11:15:46 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:17.869 11:15:46 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3412724 ]] 00:07:17.869 11:15:46 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3412724 00:07:17.869 11:15:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:17.869 11:15:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:17.869 11:15:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3412724 00:07:17.869 11:15:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:18.440 11:15:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:18.440 11:15:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:18.440 11:15:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3412724 00:07:18.440 11:15:47 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:18.440 11:15:47 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:18.440 11:15:47 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:18.440 11:15:47 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:18.440 SPDK target shutdown done 00:07:18.440 11:15:47 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:18.440 Success 00:07:18.440 00:07:18.440 real 0m1.429s 00:07:18.440 user 0m1.087s 00:07:18.440 sys 0m0.355s 00:07:18.440 11:15:47 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:18.440 11:15:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:18.440 ************************************ 00:07:18.440 END TEST json_config_extra_key 00:07:18.440 ************************************ 00:07:18.440 11:15:47 -- spdk/autotest.sh@174 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:18.440 11:15:47 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:18.440 11:15:47 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:18.440 11:15:47 -- common/autotest_common.sh@10 -- # set +x 00:07:18.440 ************************************ 00:07:18.440 START TEST alias_rpc 00:07:18.440 ************************************ 00:07:18.440 11:15:47 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:18.701 * Looking for test storage... 00:07:18.701 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:07:18.701 11:15:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:18.701 11:15:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3413026 00:07:18.701 11:15:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3413026 00:07:18.701 11:15:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:18.701 11:15:47 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 3413026 ']' 00:07:18.701 11:15:47 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.701 11:15:47 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:18.701 11:15:47 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.701 11:15:47 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:18.701 11:15:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.701 [2024-06-10 11:15:47.537610] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
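The alias_rpc run traced here reduces to starting a bare spdk_tgt and replaying a saved configuration through rpc.py with method aliases enabled. A hedged reconstruction: the config filename below is a placeholder, and -i is the --include-aliases switch that alias_rpc.sh exercises.

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $spdk/build/bin/spdk_tgt &                                # listens on the default /var/tmp/spdk.sock
  tgt_pid=$!
  # after waiting for the socket as in the earlier sketch:
  $spdk/scripts/rpc.py load_config -i < saved_config.json   # -i / --include-aliases accepts aliased method names
  kill $tgt_pid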
00:07:18.701 [2024-06-10 11:15:47.537683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413026 ] 00:07:18.701 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.701 [2024-06-10 11:15:47.602264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.962 [2024-06-10 11:15:47.677638] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.532 11:15:48 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:19.532 11:15:48 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:19.532 11:15:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:19.532 11:15:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3413026 00:07:19.532 11:15:48 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 3413026 ']' 00:07:19.532 11:15:48 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 3413026 00:07:19.532 11:15:48 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:07:19.532 11:15:48 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:19.532 11:15:48 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3413026 00:07:19.792 11:15:48 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:19.792 11:15:48 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:19.792 11:15:48 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3413026' 00:07:19.792 killing process with pid 3413026 00:07:19.792 11:15:48 alias_rpc -- common/autotest_common.sh@968 -- # kill 3413026 00:07:19.792 11:15:48 alias_rpc -- common/autotest_common.sh@973 -- # wait 3413026 00:07:19.792 00:07:19.792 real 0m1.366s 00:07:19.792 user 0m1.480s 00:07:19.792 sys 0m0.381s 00:07:19.792 11:15:48 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:19.792 11:15:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.792 ************************************ 00:07:19.792 END TEST alias_rpc 00:07:19.792 ************************************ 00:07:20.052 11:15:48 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:07:20.052 11:15:48 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:20.052 11:15:48 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:20.052 11:15:48 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:20.052 11:15:48 -- common/autotest_common.sh@10 -- # set +x 00:07:20.052 ************************************ 00:07:20.052 START TEST spdkcli_tcp 00:07:20.052 ************************************ 00:07:20.052 11:15:48 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:20.052 * Looking for test storage... 
00:07:20.052 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:07:20.052 11:15:48 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:07:20.052 11:15:48 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:20.052 11:15:48 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:07:20.052 11:15:48 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:20.052 11:15:48 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:20.052 11:15:48 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:20.052 11:15:48 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:20.052 11:15:48 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:20.052 11:15:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:20.052 11:15:48 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3413271 00:07:20.052 11:15:48 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3413271 00:07:20.052 11:15:48 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:20.052 11:15:48 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 3413271 ']' 00:07:20.052 11:15:48 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.052 11:15:48 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:20.052 11:15:48 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.052 11:15:48 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:20.052 11:15:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:20.052 [2024-06-10 11:15:48.975394] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
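The tcp.sh trace that continues below exposes the target's UNIX-domain RPC socket on 127.0.0.1:9998 through socat and then issues rpc_get_methods over TCP, which is what produces the long method list that follows. The bridge in isolation, with the port, retry count and timeout used in this run:

  # Forward TCP 9998 to the default SPDK RPC socket, then query it over TCP
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill $socat_pid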
00:07:20.052 [2024-06-10 11:15:48.975446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413271 ] 00:07:20.052 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.312 [2024-06-10 11:15:49.035352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:20.312 [2024-06-10 11:15:49.100871] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.312 [2024-06-10 11:15:49.100882] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.883 11:15:49 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:20.883 11:15:49 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:07:20.883 11:15:49 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3413581 00:07:20.883 11:15:49 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:20.883 11:15:49 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:21.143 [ 00:07:21.143 "bdev_malloc_delete", 00:07:21.143 "bdev_malloc_create", 00:07:21.143 "bdev_null_resize", 00:07:21.143 "bdev_null_delete", 00:07:21.144 "bdev_null_create", 00:07:21.144 "bdev_nvme_cuse_unregister", 00:07:21.144 "bdev_nvme_cuse_register", 00:07:21.144 "bdev_opal_new_user", 00:07:21.144 "bdev_opal_set_lock_state", 00:07:21.144 "bdev_opal_delete", 00:07:21.144 "bdev_opal_get_info", 00:07:21.144 "bdev_opal_create", 00:07:21.144 "bdev_nvme_opal_revert", 00:07:21.144 "bdev_nvme_opal_init", 00:07:21.144 "bdev_nvme_send_cmd", 00:07:21.144 "bdev_nvme_get_path_iostat", 00:07:21.144 "bdev_nvme_get_mdns_discovery_info", 00:07:21.144 "bdev_nvme_stop_mdns_discovery", 00:07:21.144 "bdev_nvme_start_mdns_discovery", 00:07:21.144 "bdev_nvme_set_multipath_policy", 00:07:21.144 "bdev_nvme_set_preferred_path", 00:07:21.144 "bdev_nvme_get_io_paths", 00:07:21.144 "bdev_nvme_remove_error_injection", 00:07:21.144 "bdev_nvme_add_error_injection", 00:07:21.144 "bdev_nvme_get_discovery_info", 00:07:21.144 "bdev_nvme_stop_discovery", 00:07:21.144 "bdev_nvme_start_discovery", 00:07:21.144 "bdev_nvme_get_controller_health_info", 00:07:21.144 "bdev_nvme_disable_controller", 00:07:21.144 "bdev_nvme_enable_controller", 00:07:21.144 "bdev_nvme_reset_controller", 00:07:21.144 "bdev_nvme_get_transport_statistics", 00:07:21.144 "bdev_nvme_apply_firmware", 00:07:21.144 "bdev_nvme_detach_controller", 00:07:21.144 "bdev_nvme_get_controllers", 00:07:21.144 "bdev_nvme_attach_controller", 00:07:21.144 "bdev_nvme_set_hotplug", 00:07:21.144 "bdev_nvme_set_options", 00:07:21.144 "bdev_passthru_delete", 00:07:21.144 "bdev_passthru_create", 00:07:21.144 "bdev_lvol_set_parent_bdev", 00:07:21.144 "bdev_lvol_set_parent", 00:07:21.144 "bdev_lvol_check_shallow_copy", 00:07:21.144 "bdev_lvol_start_shallow_copy", 00:07:21.144 "bdev_lvol_grow_lvstore", 00:07:21.144 "bdev_lvol_get_lvols", 00:07:21.144 "bdev_lvol_get_lvstores", 00:07:21.144 "bdev_lvol_delete", 00:07:21.144 "bdev_lvol_set_read_only", 00:07:21.144 "bdev_lvol_resize", 00:07:21.144 "bdev_lvol_decouple_parent", 00:07:21.144 "bdev_lvol_inflate", 00:07:21.144 "bdev_lvol_rename", 00:07:21.144 "bdev_lvol_clone_bdev", 00:07:21.144 "bdev_lvol_clone", 00:07:21.144 "bdev_lvol_snapshot", 00:07:21.144 "bdev_lvol_create", 00:07:21.144 "bdev_lvol_delete_lvstore", 00:07:21.144 "bdev_lvol_rename_lvstore", 
00:07:21.144 "bdev_lvol_create_lvstore", 00:07:21.144 "bdev_raid_set_options", 00:07:21.144 "bdev_raid_remove_base_bdev", 00:07:21.144 "bdev_raid_add_base_bdev", 00:07:21.144 "bdev_raid_delete", 00:07:21.144 "bdev_raid_create", 00:07:21.144 "bdev_raid_get_bdevs", 00:07:21.144 "bdev_error_inject_error", 00:07:21.144 "bdev_error_delete", 00:07:21.144 "bdev_error_create", 00:07:21.144 "bdev_split_delete", 00:07:21.144 "bdev_split_create", 00:07:21.144 "bdev_delay_delete", 00:07:21.144 "bdev_delay_create", 00:07:21.144 "bdev_delay_update_latency", 00:07:21.144 "bdev_zone_block_delete", 00:07:21.144 "bdev_zone_block_create", 00:07:21.144 "blobfs_create", 00:07:21.144 "blobfs_detect", 00:07:21.144 "blobfs_set_cache_size", 00:07:21.144 "bdev_aio_delete", 00:07:21.144 "bdev_aio_rescan", 00:07:21.144 "bdev_aio_create", 00:07:21.144 "bdev_ftl_set_property", 00:07:21.144 "bdev_ftl_get_properties", 00:07:21.144 "bdev_ftl_get_stats", 00:07:21.144 "bdev_ftl_unmap", 00:07:21.144 "bdev_ftl_unload", 00:07:21.144 "bdev_ftl_delete", 00:07:21.144 "bdev_ftl_load", 00:07:21.144 "bdev_ftl_create", 00:07:21.144 "bdev_virtio_attach_controller", 00:07:21.144 "bdev_virtio_scsi_get_devices", 00:07:21.144 "bdev_virtio_detach_controller", 00:07:21.144 "bdev_virtio_blk_set_hotplug", 00:07:21.144 "bdev_iscsi_delete", 00:07:21.144 "bdev_iscsi_create", 00:07:21.144 "bdev_iscsi_set_options", 00:07:21.144 "accel_error_inject_error", 00:07:21.144 "ioat_scan_accel_module", 00:07:21.144 "dsa_scan_accel_module", 00:07:21.144 "iaa_scan_accel_module", 00:07:21.144 "keyring_file_remove_key", 00:07:21.144 "keyring_file_add_key", 00:07:21.144 "keyring_linux_set_options", 00:07:21.144 "iscsi_get_histogram", 00:07:21.144 "iscsi_enable_histogram", 00:07:21.144 "iscsi_set_options", 00:07:21.144 "iscsi_get_auth_groups", 00:07:21.144 "iscsi_auth_group_remove_secret", 00:07:21.144 "iscsi_auth_group_add_secret", 00:07:21.144 "iscsi_delete_auth_group", 00:07:21.144 "iscsi_create_auth_group", 00:07:21.144 "iscsi_set_discovery_auth", 00:07:21.144 "iscsi_get_options", 00:07:21.144 "iscsi_target_node_request_logout", 00:07:21.144 "iscsi_target_node_set_redirect", 00:07:21.144 "iscsi_target_node_set_auth", 00:07:21.144 "iscsi_target_node_add_lun", 00:07:21.144 "iscsi_get_stats", 00:07:21.144 "iscsi_get_connections", 00:07:21.144 "iscsi_portal_group_set_auth", 00:07:21.144 "iscsi_start_portal_group", 00:07:21.144 "iscsi_delete_portal_group", 00:07:21.144 "iscsi_create_portal_group", 00:07:21.144 "iscsi_get_portal_groups", 00:07:21.144 "iscsi_delete_target_node", 00:07:21.144 "iscsi_target_node_remove_pg_ig_maps", 00:07:21.144 "iscsi_target_node_add_pg_ig_maps", 00:07:21.144 "iscsi_create_target_node", 00:07:21.144 "iscsi_get_target_nodes", 00:07:21.144 "iscsi_delete_initiator_group", 00:07:21.144 "iscsi_initiator_group_remove_initiators", 00:07:21.144 "iscsi_initiator_group_add_initiators", 00:07:21.144 "iscsi_create_initiator_group", 00:07:21.144 "iscsi_get_initiator_groups", 00:07:21.144 "nvmf_set_crdt", 00:07:21.144 "nvmf_set_config", 00:07:21.144 "nvmf_set_max_subsystems", 00:07:21.144 "nvmf_stop_mdns_prr", 00:07:21.144 "nvmf_publish_mdns_prr", 00:07:21.144 "nvmf_subsystem_get_listeners", 00:07:21.144 "nvmf_subsystem_get_qpairs", 00:07:21.144 "nvmf_subsystem_get_controllers", 00:07:21.144 "nvmf_get_stats", 00:07:21.144 "nvmf_get_transports", 00:07:21.144 "nvmf_create_transport", 00:07:21.144 "nvmf_get_targets", 00:07:21.144 "nvmf_delete_target", 00:07:21.144 "nvmf_create_target", 00:07:21.144 "nvmf_subsystem_allow_any_host", 00:07:21.144 
"nvmf_subsystem_remove_host", 00:07:21.144 "nvmf_subsystem_add_host", 00:07:21.144 "nvmf_ns_remove_host", 00:07:21.144 "nvmf_ns_add_host", 00:07:21.144 "nvmf_subsystem_remove_ns", 00:07:21.144 "nvmf_subsystem_add_ns", 00:07:21.144 "nvmf_subsystem_listener_set_ana_state", 00:07:21.144 "nvmf_discovery_get_referrals", 00:07:21.144 "nvmf_discovery_remove_referral", 00:07:21.144 "nvmf_discovery_add_referral", 00:07:21.144 "nvmf_subsystem_remove_listener", 00:07:21.144 "nvmf_subsystem_add_listener", 00:07:21.144 "nvmf_delete_subsystem", 00:07:21.144 "nvmf_create_subsystem", 00:07:21.144 "nvmf_get_subsystems", 00:07:21.144 "env_dpdk_get_mem_stats", 00:07:21.144 "nbd_get_disks", 00:07:21.144 "nbd_stop_disk", 00:07:21.144 "nbd_start_disk", 00:07:21.144 "ublk_recover_disk", 00:07:21.144 "ublk_get_disks", 00:07:21.144 "ublk_stop_disk", 00:07:21.144 "ublk_start_disk", 00:07:21.144 "ublk_destroy_target", 00:07:21.144 "ublk_create_target", 00:07:21.144 "virtio_blk_create_transport", 00:07:21.144 "virtio_blk_get_transports", 00:07:21.144 "vhost_controller_set_coalescing", 00:07:21.144 "vhost_get_controllers", 00:07:21.144 "vhost_delete_controller", 00:07:21.144 "vhost_create_blk_controller", 00:07:21.144 "vhost_scsi_controller_remove_target", 00:07:21.144 "vhost_scsi_controller_add_target", 00:07:21.144 "vhost_start_scsi_controller", 00:07:21.144 "vhost_create_scsi_controller", 00:07:21.144 "thread_set_cpumask", 00:07:21.144 "framework_get_scheduler", 00:07:21.144 "framework_set_scheduler", 00:07:21.144 "framework_get_reactors", 00:07:21.144 "thread_get_io_channels", 00:07:21.144 "thread_get_pollers", 00:07:21.144 "thread_get_stats", 00:07:21.144 "framework_monitor_context_switch", 00:07:21.144 "spdk_kill_instance", 00:07:21.144 "log_enable_timestamps", 00:07:21.144 "log_get_flags", 00:07:21.144 "log_clear_flag", 00:07:21.144 "log_set_flag", 00:07:21.144 "log_get_level", 00:07:21.144 "log_set_level", 00:07:21.144 "log_get_print_level", 00:07:21.144 "log_set_print_level", 00:07:21.144 "framework_enable_cpumask_locks", 00:07:21.144 "framework_disable_cpumask_locks", 00:07:21.144 "framework_wait_init", 00:07:21.144 "framework_start_init", 00:07:21.144 "scsi_get_devices", 00:07:21.144 "bdev_get_histogram", 00:07:21.144 "bdev_enable_histogram", 00:07:21.144 "bdev_set_qos_limit", 00:07:21.144 "bdev_set_qd_sampling_period", 00:07:21.144 "bdev_get_bdevs", 00:07:21.144 "bdev_reset_iostat", 00:07:21.144 "bdev_get_iostat", 00:07:21.144 "bdev_examine", 00:07:21.145 "bdev_wait_for_examine", 00:07:21.145 "bdev_set_options", 00:07:21.145 "notify_get_notifications", 00:07:21.145 "notify_get_types", 00:07:21.145 "accel_get_stats", 00:07:21.145 "accel_set_options", 00:07:21.145 "accel_set_driver", 00:07:21.145 "accel_crypto_key_destroy", 00:07:21.145 "accel_crypto_keys_get", 00:07:21.145 "accel_crypto_key_create", 00:07:21.145 "accel_assign_opc", 00:07:21.145 "accel_get_module_info", 00:07:21.145 "accel_get_opc_assignments", 00:07:21.145 "vmd_rescan", 00:07:21.145 "vmd_remove_device", 00:07:21.145 "vmd_enable", 00:07:21.145 "sock_get_default_impl", 00:07:21.145 "sock_set_default_impl", 00:07:21.145 "sock_impl_set_options", 00:07:21.145 "sock_impl_get_options", 00:07:21.145 "iobuf_get_stats", 00:07:21.145 "iobuf_set_options", 00:07:21.145 "framework_get_pci_devices", 00:07:21.145 "framework_get_config", 00:07:21.145 "framework_get_subsystems", 00:07:21.145 "trace_get_info", 00:07:21.145 "trace_get_tpoint_group_mask", 00:07:21.145 "trace_disable_tpoint_group", 00:07:21.145 "trace_enable_tpoint_group", 00:07:21.145 
"trace_clear_tpoint_mask", 00:07:21.145 "trace_set_tpoint_mask", 00:07:21.145 "keyring_get_keys", 00:07:21.145 "spdk_get_version", 00:07:21.145 "rpc_get_methods" 00:07:21.145 ] 00:07:21.145 11:15:49 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:21.145 11:15:49 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:21.145 11:15:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:21.145 11:15:49 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:21.145 11:15:49 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3413271 00:07:21.145 11:15:49 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 3413271 ']' 00:07:21.145 11:15:49 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 3413271 00:07:21.145 11:15:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:07:21.145 11:15:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:21.145 11:15:49 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3413271 00:07:21.145 11:15:49 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:21.145 11:15:49 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:21.145 11:15:49 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3413271' 00:07:21.145 killing process with pid 3413271 00:07:21.145 11:15:49 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 3413271 00:07:21.145 11:15:49 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 3413271 00:07:21.404 00:07:21.404 real 0m1.402s 00:07:21.404 user 0m2.580s 00:07:21.404 sys 0m0.409s 00:07:21.404 11:15:50 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:21.404 11:15:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:21.404 ************************************ 00:07:21.404 END TEST spdkcli_tcp 00:07:21.404 ************************************ 00:07:21.404 11:15:50 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:21.405 11:15:50 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:21.405 11:15:50 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:21.405 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:07:21.405 ************************************ 00:07:21.405 START TEST dpdk_mem_utility 00:07:21.405 ************************************ 00:07:21.405 11:15:50 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:21.664 * Looking for test storage... 
00:07:21.664 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:07:21.664 11:15:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:21.664 11:15:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3413657 00:07:21.664 11:15:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3413657 00:07:21.664 11:15:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:21.664 11:15:50 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 3413657 ']' 00:07:21.664 11:15:50 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.664 11:15:50 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:21.664 11:15:50 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.664 11:15:50 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:21.664 11:15:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:21.664 [2024-06-10 11:15:50.438640] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:07:21.664 [2024-06-10 11:15:50.438690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413657 ] 00:07:21.664 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.664 [2024-06-10 11:15:50.498651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.664 [2024-06-10 11:15:50.562661] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.604 11:15:51 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:22.604 11:15:51 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:07:22.605 11:15:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:22.605 11:15:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:22.605 11:15:51 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:22.605 11:15:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:22.605 { 00:07:22.605 "filename": "/tmp/spdk_mem_dump.txt" 00:07:22.605 } 00:07:22.605 11:15:51 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:22.605 11:15:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:22.605 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:22.605 1 heaps totaling size 814.000000 MiB 00:07:22.605 size: 814.000000 MiB heap id: 0 00:07:22.605 end heaps---------- 00:07:22.605 8 mempools totaling size 598.116089 MiB 00:07:22.605 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:22.605 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:22.605 size: 84.521057 MiB name: bdev_io_3413657 00:07:22.605 size: 51.011292 MiB name: evtpool_3413657 00:07:22.605 size: 50.003479 MiB name: msgpool_3413657 
00:07:22.605 size: 21.763794 MiB name: PDU_Pool 00:07:22.605 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:22.605 size: 0.026123 MiB name: Session_Pool 00:07:22.605 end mempools------- 00:07:22.605 6 memzones totaling size 4.142822 MiB 00:07:22.605 size: 1.000366 MiB name: RG_ring_0_3413657 00:07:22.605 size: 1.000366 MiB name: RG_ring_1_3413657 00:07:22.605 size: 1.000366 MiB name: RG_ring_4_3413657 00:07:22.605 size: 1.000366 MiB name: RG_ring_5_3413657 00:07:22.605 size: 0.125366 MiB name: RG_ring_2_3413657 00:07:22.605 size: 0.015991 MiB name: RG_ring_3_3413657 00:07:22.605 end memzones------- 00:07:22.605 11:15:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:22.605 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:07:22.605 list of free elements. size: 12.519348 MiB 00:07:22.605 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:22.605 element at address: 0x200018e00000 with size: 0.999878 MiB 00:07:22.605 element at address: 0x200019000000 with size: 0.999878 MiB 00:07:22.605 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:22.605 element at address: 0x200031c00000 with size: 0.994446 MiB 00:07:22.605 element at address: 0x200013800000 with size: 0.978699 MiB 00:07:22.605 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:22.605 element at address: 0x200019200000 with size: 0.936584 MiB 00:07:22.605 element at address: 0x200000200000 with size: 0.841614 MiB 00:07:22.605 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:07:22.605 element at address: 0x20000b200000 with size: 0.490723 MiB 00:07:22.605 element at address: 0x200000800000 with size: 0.487793 MiB 00:07:22.605 element at address: 0x200019400000 with size: 0.485657 MiB 00:07:22.605 element at address: 0x200027e00000 with size: 0.410034 MiB 00:07:22.605 element at address: 0x200003a00000 with size: 0.355530 MiB 00:07:22.605 list of standard malloc elements. 
size: 199.218079 MiB 00:07:22.605 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:22.605 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:22.605 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:22.605 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:22.605 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:22.605 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:22.605 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:22.605 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:22.605 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:22.605 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:07:22.605 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:07:22.605 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:07:22.605 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:22.605 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:22.605 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:22.605 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:22.605 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:22.605 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:22.605 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:22.605 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:22.605 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:22.605 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:22.605 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:22.605 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:22.605 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:22.605 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:22.605 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:22.605 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:22.605 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:22.605 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:22.605 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:22.605 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:22.605 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:07:22.605 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:22.605 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:22.605 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:22.605 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:07:22.605 element at address: 0x200027e69040 with size: 0.000183 MiB 00:07:22.605 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:07:22.605 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:22.605 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:22.605 list of memzone associated elements. 
size: 602.262573 MiB 00:07:22.605 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:22.605 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:22.605 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:22.605 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:22.605 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:22.605 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3413657_0 00:07:22.605 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:22.605 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3413657_0 00:07:22.605 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:22.605 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3413657_0 00:07:22.605 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:22.605 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:22.605 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:22.605 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:22.605 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:22.605 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3413657 00:07:22.605 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:22.605 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3413657 00:07:22.605 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:22.605 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3413657 00:07:22.605 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:22.605 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:22.605 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:07:22.605 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:22.605 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:22.605 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:22.605 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:22.605 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:22.605 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:22.605 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3413657 00:07:22.605 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:22.605 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3413657 00:07:22.605 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:22.605 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3413657 00:07:22.605 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:22.605 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3413657 00:07:22.605 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:22.605 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3413657 00:07:22.605 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:07:22.605 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:22.605 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:22.605 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:22.605 element at address: 0x20001947c540 with size: 0.250488 MiB 00:07:22.605 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:22.606 element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:22.606 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3413657 00:07:22.606 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:22.606 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:22.606 element at address: 0x200027e69100 with size: 0.023743 MiB 00:07:22.606 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:22.606 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:22.606 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3413657 00:07:22.606 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:07:22.606 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:22.606 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:07:22.606 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3413657 00:07:22.606 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:22.606 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3413657 00:07:22.606 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:07:22.606 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:22.606 11:15:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:22.606 11:15:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3413657 00:07:22.606 11:15:51 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 3413657 ']' 00:07:22.606 11:15:51 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 3413657 00:07:22.606 11:15:51 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:07:22.606 11:15:51 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:22.606 11:15:51 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3413657 00:07:22.606 11:15:51 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:22.606 11:15:51 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:22.606 11:15:51 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3413657' 00:07:22.606 killing process with pid 3413657 00:07:22.606 11:15:51 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 3413657 00:07:22.606 11:15:51 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 3413657 00:07:22.606 00:07:22.606 real 0m1.266s 00:07:22.606 user 0m1.355s 00:07:22.606 sys 0m0.338s 00:07:22.606 11:15:51 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:22.606 11:15:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:22.606 ************************************ 00:07:22.606 END TEST dpdk_mem_utility 00:07:22.606 ************************************ 00:07:22.867 11:15:51 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:07:22.867 11:15:51 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:22.867 11:15:51 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:22.867 11:15:51 -- common/autotest_common.sh@10 -- # set +x 00:07:22.867 ************************************ 00:07:22.867 START TEST event 00:07:22.867 ************************************ 00:07:22.867 11:15:51 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:07:22.867 * Looking for test storage... 
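The heap, mempool and memzone listing above is produced by two scripts that can be pointed at any running SPDK target; as used by test_dpdk_mem_info.sh in this run:

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $spdk/scripts/rpc.py env_dpdk_get_mem_stats    # asks the target to write /tmp/spdk_mem_dump.txt
  $spdk/scripts/dpdk_mem_info.py                 # summarize heaps, mempools and memzones
  $spdk/scripts/dpdk_mem_info.py -m 0            # per-element breakdown of heap 0, as dumped above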
00:07:22.867 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:07:22.867 11:15:51 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:22.867 11:15:51 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:22.867 11:15:51 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:22.867 11:15:51 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:07:22.867 11:15:51 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:22.867 11:15:51 event -- common/autotest_common.sh@10 -- # set +x 00:07:22.867 ************************************ 00:07:22.867 START TEST event_perf 00:07:22.867 ************************************ 00:07:22.867 11:15:51 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:22.867 Running I/O for 1 seconds...[2024-06-10 11:15:51.777517] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:07:22.867 [2024-06-10 11:15:51.777614] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414043 ] 00:07:22.867 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.128 [2024-06-10 11:15:51.841438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.128 [2024-06-10 11:15:51.909598] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.128 [2024-06-10 11:15:51.909713] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.128 [2024-06-10 11:15:51.909838] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.128 [2024-06-10 11:15:51.909839] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.088 Running I/O for 1 seconds... 00:07:24.088 lcore 0: 181180 00:07:24.088 lcore 1: 181180 00:07:24.088 lcore 2: 181177 00:07:24.088 lcore 3: 181179 00:07:24.088 done. 00:07:24.088 00:07:24.088 real 0m1.206s 00:07:24.088 user 0m4.123s 00:07:24.088 sys 0m0.079s 00:07:24.088 11:15:52 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:24.088 11:15:52 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:24.088 ************************************ 00:07:24.088 END TEST event_perf 00:07:24.088 ************************************ 00:07:24.088 11:15:52 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:24.088 11:15:53 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:24.088 11:15:53 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:24.088 11:15:53 event -- common/autotest_common.sh@10 -- # set +x 00:07:24.088 ************************************ 00:07:24.088 START TEST event_reactor 00:07:24.088 ************************************ 00:07:24.088 11:15:53 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:24.348 [2024-06-10 11:15:53.062017] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:07:24.348 [2024-06-10 11:15:53.062114] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414400 ] 00:07:24.348 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.348 [2024-06-10 11:15:53.124725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.348 [2024-06-10 11:15:53.187937] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.288 test_start 00:07:25.288 oneshot 00:07:25.288 tick 100 00:07:25.288 tick 100 00:07:25.288 tick 250 00:07:25.288 tick 100 00:07:25.288 tick 100 00:07:25.288 tick 250 00:07:25.288 tick 100 00:07:25.288 tick 500 00:07:25.288 tick 100 00:07:25.288 tick 100 00:07:25.288 tick 250 00:07:25.288 tick 100 00:07:25.288 tick 100 00:07:25.288 test_end 00:07:25.288 00:07:25.288 real 0m1.201s 00:07:25.288 user 0m1.123s 00:07:25.288 sys 0m0.075s 00:07:25.288 11:15:54 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:25.288 11:15:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:25.288 ************************************ 00:07:25.288 END TEST event_reactor 00:07:25.288 ************************************ 00:07:25.550 11:15:54 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:25.550 11:15:54 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:25.550 11:15:54 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:25.550 11:15:54 event -- common/autotest_common.sh@10 -- # set +x 00:07:25.550 ************************************ 00:07:25.550 START TEST event_reactor_perf 00:07:25.550 ************************************ 00:07:25.550 11:15:54 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:25.550 [2024-06-10 11:15:54.338113] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
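The event-framework micro-benchmarks traced in this stretch of the log are standalone binaries driven only by a core mask and a run time; the invocations behind the output above and below are:

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $spdk/test/event/event_perf/event_perf -m 0xF -t 1   # per-lcore event counts for 1 second on 4 cores
  $spdk/test/event/reactor/reactor -t 1                # oneshot/tick poller trace on a single core
  $spdk/test/event/reactor_perf/reactor_perf -t 1      # events-per-second figure reported below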
00:07:25.550 [2024-06-10 11:15:54.338191] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414677 ] 00:07:25.550 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.550 [2024-06-10 11:15:54.403431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.550 [2024-06-10 11:15:54.473482] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.932 test_start 00:07:26.932 test_end 00:07:26.932 Performance: 371948 events per second 00:07:26.932 00:07:26.932 real 0m1.211s 00:07:26.932 user 0m1.135s 00:07:26.932 sys 0m0.073s 00:07:26.933 11:15:55 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:26.933 11:15:55 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:26.933 ************************************ 00:07:26.933 END TEST event_reactor_perf 00:07:26.933 ************************************ 00:07:26.933 11:15:55 event -- event/event.sh@49 -- # uname -s 00:07:26.933 11:15:55 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:26.933 11:15:55 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:26.933 11:15:55 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:26.933 11:15:55 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:26.933 11:15:55 event -- common/autotest_common.sh@10 -- # set +x 00:07:26.933 ************************************ 00:07:26.933 START TEST event_scheduler 00:07:26.933 ************************************ 00:07:26.933 11:15:55 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:26.933 * Looking for test storage... 00:07:26.933 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:07:26.933 11:15:55 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:26.933 11:15:55 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3414900 00:07:26.933 11:15:55 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:26.933 11:15:55 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:26.933 11:15:55 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3414900 00:07:26.933 11:15:55 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 3414900 ']' 00:07:26.933 11:15:55 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.933 11:15:55 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:26.933 11:15:55 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
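The event_scheduler test that starts below launches the scheduler test app with --wait-for-rpc, switches it to the dynamic scheduler over RPC, and only then completes framework initialization. The RPC sequence on its own, against the default /var/tmp/spdk.sock; framework_get_reactors is included here only as a way to inspect the result:

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $spdk/scripts/rpc.py framework_set_scheduler dynamic   # set before framework_start_init, as in the trace below
  $spdk/scripts/rpc.py framework_start_init
  $spdk/scripts/rpc.py framework_get_reactors            # optional: show which lcores host which threads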
00:07:26.933 11:15:55 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:26.933 11:15:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:26.933 [2024-06-10 11:15:55.761518] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:07:26.933 [2024-06-10 11:15:55.761585] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414900 ] 00:07:26.933 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.933 [2024-06-10 11:15:55.817280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.933 [2024-06-10 11:15:55.883781] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.933 [2024-06-10 11:15:55.883958] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.933 [2024-06-10 11:15:55.884116] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.933 [2024-06-10 11:15:55.884118] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.875 11:15:56 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:27.875 11:15:56 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:07:27.875 11:15:56 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:27.875 11:15:56 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.875 11:15:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:27.875 POWER: Env isn't set yet! 00:07:27.875 POWER: Attempting to initialise ACPI cpufreq power management... 00:07:27.875 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:27.875 POWER: Cannot set governor of lcore 0 to userspace 00:07:27.875 POWER: Attempting to initialise PSTAT power management... 
00:07:27.875 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:07:27.875 POWER: Initialized successfully for lcore 0 power management 00:07:27.875 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:07:27.875 POWER: Initialized successfully for lcore 1 power management 00:07:27.875 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:07:27.875 POWER: Initialized successfully for lcore 2 power management 00:07:27.875 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:07:27.875 POWER: Initialized successfully for lcore 3 power management 00:07:27.875 [2024-06-10 11:15:56.583306] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:27.875 [2024-06-10 11:15:56.583318] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:27.875 [2024-06-10 11:15:56.583324] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:27.875 11:15:56 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.875 11:15:56 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:27.875 11:15:56 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.875 11:15:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:27.875 [2024-06-10 11:15:56.640946] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:27.875 11:15:56 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.875 11:15:56 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:27.875 11:15:56 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:27.875 11:15:56 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:27.875 11:15:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:27.875 ************************************ 00:07:27.875 START TEST scheduler_create_thread 00:07:27.875 ************************************ 00:07:27.875 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:07:27.875 11:15:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:27.875 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.875 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.875 2 00:07:27.875 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.875 11:15:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:27.875 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.875 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.875 3 00:07:27.875 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.875 11:15:56 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.876 4 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.876 5 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.876 6 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.876 7 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.876 8 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.876 9 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:07:27.876 11:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.261 10 00:07:29.261 11:15:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:29.261 11:15:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:29.261 11:15:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:29.261 11:15:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.203 11:15:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:30.203 11:15:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:30.203 11:15:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:30.203 11:15:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:30.203 11:15:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.773 11:15:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:30.773 11:15:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:30.773 11:15:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:30.773 11:15:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.344 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:31.344 11:16:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:31.344 11:16:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:31.344 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:31.344 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:32.332 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:32.332 00:07:32.332 real 0m4.267s 00:07:32.332 user 0m0.023s 00:07:32.332 sys 0m0.008s 00:07:32.332 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:32.332 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:32.332 ************************************ 00:07:32.332 END TEST scheduler_create_thread 00:07:32.332 ************************************ 00:07:32.332 11:16:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:32.332 11:16:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3414900 00:07:32.332 11:16:00 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 3414900 ']' 00:07:32.332 11:16:00 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 3414900 00:07:32.332 11:16:00 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 
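For reference, every thread created and torn down in the scheduler_create_thread run above goes through the scheduler RPC plugin driven by rpc_cmd. A minimal hand-run sketch of the same calls, assuming the scheduler test application is already listening on its RPC socket and the plugin is reachable on rpc.py's plugin path, would be roughly:

  # pin an always-busy thread and an idle thread to core 0
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  # create an unpinned thread at 30% activity, raise it to 50%, then remove it
  tid=$(scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30)
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$tid"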
00:07:32.332 11:16:00 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:32.332 11:16:00 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3414900 00:07:32.332 11:16:01 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:07:32.332 11:16:01 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:07:32.332 11:16:01 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3414900' 00:07:32.332 killing process with pid 3414900 00:07:32.332 11:16:01 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 3414900 00:07:32.332 11:16:01 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 3414900 00:07:32.332 [2024-06-10 11:16:01.273254] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:32.592 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:07:32.592 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:07:32.592 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:07:32.592 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:07:32.592 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:07:32.592 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:07:32.592 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:07:32.592 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:07:32.592 00:07:32.592 real 0m5.838s 00:07:32.592 user 0m14.232s 00:07:32.592 sys 0m0.342s 00:07:32.592 11:16:01 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:32.592 11:16:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:32.592 ************************************ 00:07:32.592 END TEST event_scheduler 00:07:32.592 ************************************ 00:07:32.592 11:16:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:32.592 11:16:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:32.592 11:16:01 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:32.592 11:16:01 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:32.592 11:16:01 event -- common/autotest_common.sh@10 -- # set +x 00:07:32.592 ************************************ 00:07:32.592 START TEST app_repeat 00:07:32.592 ************************************ 00:07:32.592 11:16:01 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 00:07:32.592 11:16:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.592 11:16:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.592 11:16:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:32.592 11:16:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:32.592 11:16:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:32.592 11:16:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:32.592 11:16:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:32.592 11:16:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3416200 00:07:32.592 11:16:01 
event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:32.592 11:16:01 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:32.592 11:16:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3416200' 00:07:32.592 Process app_repeat pid: 3416200 00:07:32.592 11:16:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:32.592 11:16:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:32.592 spdk_app_start Round 0 00:07:32.592 11:16:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3416200 /var/tmp/spdk-nbd.sock 00:07:32.592 11:16:01 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 3416200 ']' 00:07:32.592 11:16:01 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:32.592 11:16:01 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:32.592 11:16:01 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:32.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:32.592 11:16:01 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:32.592 11:16:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:32.853 [2024-06-10 11:16:01.566873] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:07:32.853 [2024-06-10 11:16:01.566935] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3416200 ] 00:07:32.853 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.853 [2024-06-10 11:16:01.628229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:32.853 [2024-06-10 11:16:01.693828] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.853 [2024-06-10 11:16:01.693840] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.422 11:16:02 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:33.422 11:16:02 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:07:33.422 11:16:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:33.683 Malloc0 00:07:33.683 11:16:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:33.683 Malloc1 00:07:33.944 11:16:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:33.944 11:16:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.944 11:16:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:33.944 11:16:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:33.944 11:16:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.944 11:16:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:33.944 11:16:02 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:33.944 11:16:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.944 11:16:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:33.944 11:16:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:33.944 11:16:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.944 11:16:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:33.944 11:16:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:33.944 11:16:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:33.944 11:16:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:33.944 11:16:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:33.944 /dev/nbd0 00:07:33.944 11:16:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:33.944 11:16:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:33.944 11:16:02 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:07:33.944 11:16:02 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:07:33.944 11:16:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:33.944 11:16:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:33.945 11:16:02 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:07:33.945 11:16:02 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:07:33.945 11:16:02 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:07:33.945 11:16:02 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:07:33.945 11:16:02 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:33.945 1+0 records in 00:07:33.945 1+0 records out 00:07:33.945 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268544 s, 15.3 MB/s 00:07:33.945 11:16:02 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:33.945 11:16:02 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:07:33.945 11:16:02 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:33.945 11:16:02 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:07:33.945 11:16:02 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:07:33.945 11:16:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:33.945 11:16:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:33.945 11:16:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:34.206 /dev/nbd1 00:07:34.206 11:16:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:34.206 11:16:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:34.206 11:16:03 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:07:34.206 11:16:03 event.app_repeat -- common/autotest_common.sh@868 
-- # local i 00:07:34.206 11:16:03 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:34.206 11:16:03 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:34.206 11:16:03 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:07:34.206 11:16:03 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:07:34.206 11:16:03 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:07:34.206 11:16:03 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:07:34.206 11:16:03 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:34.206 1+0 records in 00:07:34.206 1+0 records out 00:07:34.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278943 s, 14.7 MB/s 00:07:34.206 11:16:03 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:34.206 11:16:03 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:07:34.206 11:16:03 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:34.206 11:16:03 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:07:34.206 11:16:03 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:07:34.206 11:16:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:34.206 11:16:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:34.206 11:16:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:34.206 11:16:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.206 11:16:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:34.468 { 00:07:34.468 "nbd_device": "/dev/nbd0", 00:07:34.468 "bdev_name": "Malloc0" 00:07:34.468 }, 00:07:34.468 { 00:07:34.468 "nbd_device": "/dev/nbd1", 00:07:34.468 "bdev_name": "Malloc1" 00:07:34.468 } 00:07:34.468 ]' 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:34.468 { 00:07:34.468 "nbd_device": "/dev/nbd0", 00:07:34.468 "bdev_name": "Malloc0" 00:07:34.468 }, 00:07:34.468 { 00:07:34.468 "nbd_device": "/dev/nbd1", 00:07:34.468 "bdev_name": "Malloc1" 00:07:34.468 } 00:07:34.468 ]' 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:34.468 /dev/nbd1' 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:34.468 /dev/nbd1' 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:34.468 256+0 records in 00:07:34.468 256+0 records out 00:07:34.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124569 s, 84.2 MB/s 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:34.468 256+0 records in 00:07:34.468 256+0 records out 00:07:34.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162785 s, 64.4 MB/s 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:34.468 256+0 records in 00:07:34.468 256+0 records out 00:07:34.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173314 s, 60.5 MB/s 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:34.468 11:16:03 event.app_repeat -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:34.468 11:16:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:34.729 11:16:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:34.729 11:16:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:34.729 11:16:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:34.729 11:16:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:34.729 11:16:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:34.729 11:16:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:34.729 11:16:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:34.729 11:16:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:34.729 11:16:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:34.729 11:16:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:34.729 11:16:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:34.729 11:16:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:34.729 11:16:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:34.729 11:16:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:34.729 11:16:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:34.729 11:16:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:34.729 11:16:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:34.729 11:16:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:34.729 11:16:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:34.729 11:16:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.729 11:16:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:34.990 11:16:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:34.990 11:16:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:34.990 11:16:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:34.990 11:16:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:34.990 11:16:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:34.990 11:16:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:34.990 11:16:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:34.990 11:16:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:34.990 11:16:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:34.990 11:16:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:34.990 11:16:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:34.990 11:16:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:34.991 11:16:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:35.252 11:16:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:35.252 [2024-06-10 11:16:04.175025] app.c: 
909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:35.513 [2024-06-10 11:16:04.238180] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.513 [2024-06-10 11:16:04.238183] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.513 [2024-06-10 11:16:04.269679] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:35.513 [2024-06-10 11:16:04.269714] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:38.816 11:16:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:38.816 11:16:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:38.816 spdk_app_start Round 1 00:07:38.816 11:16:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3416200 /var/tmp/spdk-nbd.sock 00:07:38.816 11:16:07 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 3416200 ']' 00:07:38.816 11:16:07 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:38.816 11:16:07 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:38.816 11:16:07 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:38.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:38.816 11:16:07 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:38.816 11:16:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:38.816 11:16:07 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:38.816 11:16:07 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:07:38.816 11:16:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:38.816 Malloc0 00:07:38.816 11:16:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:38.816 Malloc1 00:07:38.816 11:16:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:38.816 11:16:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.816 11:16:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:38.816 11:16:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:38.816 11:16:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:38.816 11:16:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:38.816 11:16:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:38.816 11:16:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.816 11:16:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:38.816 11:16:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:38.816 11:16:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:38.816 11:16:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:38.816 11:16:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # 
local i 00:07:38.816 11:16:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:38.816 11:16:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:38.816 11:16:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:38.816 /dev/nbd0 00:07:38.816 11:16:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:38.816 11:16:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:38.816 11:16:07 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:07:38.817 11:16:07 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:07:38.817 11:16:07 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:38.817 11:16:07 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:38.817 11:16:07 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:07:38.817 11:16:07 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:07:38.817 11:16:07 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:07:38.817 11:16:07 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:07:38.817 11:16:07 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:38.817 1+0 records in 00:07:38.817 1+0 records out 00:07:38.817 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274529 s, 14.9 MB/s 00:07:38.817 11:16:07 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:38.817 11:16:07 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:07:38.817 11:16:07 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:38.817 11:16:07 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:07:38.817 11:16:07 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:07:38.817 11:16:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:38.817 11:16:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:38.817 11:16:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:39.078 /dev/nbd1 00:07:39.078 11:16:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:39.078 11:16:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:39.078 11:16:07 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:07:39.078 11:16:07 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:07:39.078 11:16:07 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:39.078 11:16:07 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:39.078 11:16:07 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:07:39.078 11:16:07 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:07:39.078 11:16:07 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:07:39.078 11:16:07 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:07:39.078 11:16:07 event.app_repeat -- common/autotest_common.sh@884 -- # dd 
if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:39.078 1+0 records in 00:07:39.078 1+0 records out 00:07:39.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000124606 s, 32.9 MB/s 00:07:39.078 11:16:07 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:39.078 11:16:07 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:07:39.078 11:16:07 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:39.078 11:16:07 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:07:39.078 11:16:07 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:07:39.078 11:16:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.078 11:16:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:39.078 11:16:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:39.078 11:16:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.078 11:16:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:39.078 11:16:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:39.078 { 00:07:39.078 "nbd_device": "/dev/nbd0", 00:07:39.078 "bdev_name": "Malloc0" 00:07:39.078 }, 00:07:39.078 { 00:07:39.078 "nbd_device": "/dev/nbd1", 00:07:39.078 "bdev_name": "Malloc1" 00:07:39.078 } 00:07:39.078 ]' 00:07:39.078 11:16:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:39.078 { 00:07:39.078 "nbd_device": "/dev/nbd0", 00:07:39.078 "bdev_name": "Malloc0" 00:07:39.078 }, 00:07:39.078 { 00:07:39.078 "nbd_device": "/dev/nbd1", 00:07:39.078 "bdev_name": "Malloc1" 00:07:39.078 } 00:07:39.078 ]' 00:07:39.078 11:16:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:39.078 11:16:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:39.078 /dev/nbd1' 00:07:39.078 11:16:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:39.078 /dev/nbd1' 00:07:39.078 11:16:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:39.078 11:16:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:39.078 11:16:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:39.078 11:16:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:39.078 11:16:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:39.078 11:16:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:39.078 11:16:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.078 11:16:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:39.078 11:16:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:39.078 11:16:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:39.078 11:16:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:39.078 11:16:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:39.339 256+0 records in 
00:07:39.339 256+0 records out 00:07:39.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115917 s, 90.5 MB/s 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:39.339 256+0 records in 00:07:39.339 256+0 records out 00:07:39.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164723 s, 63.7 MB/s 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:39.339 256+0 records in 00:07:39.339 256+0 records out 00:07:39.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172497 s, 60.8 MB/s 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:39.339 11:16:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:39.600 11:16:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:39.600 11:16:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:39.600 11:16:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:39.600 11:16:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:39.600 11:16:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:39.600 11:16:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:39.600 11:16:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:39.600 11:16:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:39.601 11:16:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:39.601 11:16:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.601 11:16:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:39.862 11:16:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:39.862 11:16:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:39.862 11:16:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:39.862 11:16:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:39.862 11:16:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:39.862 11:16:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:39.862 11:16:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:39.862 11:16:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:39.862 11:16:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:39.862 11:16:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:39.862 11:16:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:39.862 11:16:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:39.862 11:16:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:40.158 11:16:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:40.158 [2024-06-10 11:16:08.973599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:40.158 [2024-06-10 11:16:09.037112] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.158 [2024-06-10 11:16:09.037115] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.158 [2024-06-10 11:16:09.069426] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:40.158 [2024-06-10 11:16:09.069459] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
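Each app_repeat round exercises the same nbd data path traced above: 1 MiB of random data is written through each exported /dev/nbdX and compared back byte-for-byte. A stripped-down sketch of that verify loop, assuming /dev/nbd0 and /dev/nbd1 are already mapped to the Malloc bdevs as shown, is:

  tmp=$(mktemp)                                              # stand-in for the nbdrandtest file used above
  dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # write it through the nbd device
      cmp -b -n 1M "$tmp" "$nbd"                             # read back and verify
  done
  rm -f "$tmp"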
00:07:43.475 11:16:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:43.475 11:16:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:43.475 spdk_app_start Round 2 00:07:43.475 11:16:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3416200 /var/tmp/spdk-nbd.sock 00:07:43.475 11:16:11 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 3416200 ']' 00:07:43.475 11:16:11 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:43.475 11:16:11 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:43.475 11:16:11 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:43.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:43.475 11:16:11 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:43.475 11:16:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:43.475 11:16:12 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:43.475 11:16:12 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:07:43.475 11:16:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:43.475 Malloc0 00:07:43.475 11:16:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:43.475 Malloc1 00:07:43.475 11:16:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:43.475 11:16:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.475 11:16:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:43.475 11:16:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:43.475 11:16:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:43.475 11:16:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:43.475 11:16:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:43.475 11:16:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.475 11:16:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:43.475 11:16:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:43.475 11:16:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:43.475 11:16:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:43.475 11:16:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:43.475 11:16:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:43.475 11:16:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:43.475 11:16:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:43.737 /dev/nbd0 00:07:43.737 11:16:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:43.737 11:16:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:43.737 1+0 records in 00:07:43.737 1+0 records out 00:07:43.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283096 s, 14.5 MB/s 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:07:43.737 11:16:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:43.737 11:16:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:43.737 11:16:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:43.737 /dev/nbd1 00:07:43.737 11:16:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:43.737 11:16:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:43.737 1+0 records in 00:07:43.737 1+0 records out 00:07:43.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289493 s, 14.1 MB/s 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:07:43.737 11:16:12 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:07:43.737 11:16:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:43.737 11:16:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:43.737 11:16:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:43.737 11:16:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.737 11:16:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:43.998 { 00:07:43.998 "nbd_device": "/dev/nbd0", 00:07:43.998 "bdev_name": "Malloc0" 00:07:43.998 }, 00:07:43.998 { 00:07:43.998 "nbd_device": "/dev/nbd1", 00:07:43.998 "bdev_name": "Malloc1" 00:07:43.998 } 00:07:43.998 ]' 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:43.998 { 00:07:43.998 "nbd_device": "/dev/nbd0", 00:07:43.998 "bdev_name": "Malloc0" 00:07:43.998 }, 00:07:43.998 { 00:07:43.998 "nbd_device": "/dev/nbd1", 00:07:43.998 "bdev_name": "Malloc1" 00:07:43.998 } 00:07:43.998 ]' 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:43.998 /dev/nbd1' 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:43.998 /dev/nbd1' 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:43.998 256+0 records in 00:07:43.998 256+0 records out 00:07:43.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116826 s, 89.8 MB/s 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:43.998 256+0 records in 00:07:43.998 256+0 records out 00:07:43.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161008 s, 65.1 MB/s 00:07:43.998 11:16:12 event.app_repeat 
-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:43.998 256+0 records in 00:07:43.998 256+0 records out 00:07:43.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171603 s, 61.1 MB/s 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:43.998 11:16:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:44.258 11:16:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:44.258 11:16:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:44.258 11:16:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:44.258 11:16:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:44.258 11:16:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:44.258 11:16:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.258 11:16:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:44.258 11:16:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:44.258 11:16:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:44.258 11:16:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:44.258 11:16:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:44.258 11:16:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:44.258 11:16:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:44.258 11:16:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:44.259 11:16:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:44.259 11:16:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:44.259 11:16:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:44.259 11:16:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:44.259 11:16:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:44.259 11:16:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:44.259 11:16:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:44.518 11:16:13 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:44.518 11:16:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:44.518 11:16:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:44.518 11:16:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:44.518 11:16:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:44.518 11:16:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:44.518 11:16:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:44.518 11:16:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:44.518 11:16:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:44.518 11:16:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.518 11:16:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:44.782 11:16:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:44.782 11:16:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:44.782 11:16:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:44.782 11:16:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:44.782 11:16:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:44.782 11:16:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:44.782 11:16:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:44.782 11:16:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:44.782 11:16:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:44.782 11:16:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:44.782 11:16:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:44.782 11:16:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:44.782 11:16:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:44.782 11:16:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:45.043 [2024-06-10 11:16:13.841602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:45.043 [2024-06-10 11:16:13.904746] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.043 [2024-06-10 11:16:13.904748] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.043 [2024-06-10 11:16:13.936291] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:45.043 [2024-06-10 11:16:13.936326] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:48.347 11:16:16 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3416200 /var/tmp/spdk-nbd.sock 00:07:48.347 11:16:16 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 3416200 ']' 00:07:48.347 11:16:16 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:48.347 11:16:16 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:48.347 11:16:16 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:48.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:48.347 11:16:16 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:48.347 11:16:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:48.347 11:16:16 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:48.347 11:16:16 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:07:48.347 11:16:16 event.app_repeat -- event/event.sh@39 -- # killprocess 3416200 00:07:48.347 11:16:16 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 3416200 ']' 00:07:48.347 11:16:16 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 3416200 00:07:48.347 11:16:16 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:07:48.347 11:16:16 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:48.347 11:16:16 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3416200 00:07:48.347 11:16:16 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:48.347 11:16:16 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:48.347 11:16:16 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3416200' 00:07:48.347 killing process with pid 3416200 00:07:48.347 11:16:16 event.app_repeat -- common/autotest_common.sh@968 -- # kill 3416200 00:07:48.347 11:16:16 event.app_repeat -- common/autotest_common.sh@973 -- # wait 3416200 00:07:48.347 spdk_app_start is called in Round 0. 00:07:48.347 Shutdown signal received, stop current app iteration 00:07:48.347 Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 reinitialization... 00:07:48.347 spdk_app_start is called in Round 1. 00:07:48.347 Shutdown signal received, stop current app iteration 00:07:48.347 Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 reinitialization... 00:07:48.347 spdk_app_start is called in Round 2. 00:07:48.347 Shutdown signal received, stop current app iteration 00:07:48.347 Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 reinitialization... 00:07:48.347 spdk_app_start is called in Round 3. 
00:07:48.347 Shutdown signal received, stop current app iteration 00:07:48.347 11:16:17 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:48.347 11:16:17 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:48.347 00:07:48.347 real 0m15.500s 00:07:48.347 user 0m33.437s 00:07:48.347 sys 0m2.027s 00:07:48.347 11:16:17 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:48.347 11:16:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:48.347 ************************************ 00:07:48.347 END TEST app_repeat 00:07:48.347 ************************************ 00:07:48.347 11:16:17 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:48.347 11:16:17 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:48.347 11:16:17 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:48.347 11:16:17 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:48.347 11:16:17 event -- common/autotest_common.sh@10 -- # set +x 00:07:48.347 ************************************ 00:07:48.347 START TEST cpu_locks 00:07:48.347 ************************************ 00:07:48.347 11:16:17 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:48.347 * Looking for test storage... 00:07:48.347 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:07:48.347 11:16:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:48.347 11:16:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:48.347 11:16:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:48.347 11:16:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:48.347 11:16:17 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:48.347 11:16:17 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:48.347 11:16:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:48.347 ************************************ 00:07:48.347 START TEST default_locks 00:07:48.347 ************************************ 00:07:48.347 11:16:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:07:48.347 11:16:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3419462 00:07:48.347 11:16:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3419462 00:07:48.347 11:16:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:48.347 11:16:17 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 3419462 ']' 00:07:48.347 11:16:17 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.347 11:16:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:48.347 11:16:17 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
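Editor's note: the default_locks run that starts here launches a single spdk_tgt on core mask 0x1 and then verifies that the target really holds a per-core file lock. Judging by the locks_exist / lslocks / grep calls that appear in the trace below, the check amounts to roughly the following sketch (the helper name and lock-file prefix are taken from the trace; the exact script body is an assumption):

    # sketch: does process $1 hold an SPDK per-core lock?
    locks_exist() {
        local pid=$1
        # lslocks -p lists the file locks held by that pid; SPDK names its
        # core locks spdk_cpu_lock_NNN, one file per claimed core
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }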
00:07:48.347 11:16:17 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:48.347 11:16:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:48.347 [2024-06-10 11:16:17.295852] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:07:48.347 [2024-06-10 11:16:17.295905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419462 ] 00:07:48.609 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.609 [2024-06-10 11:16:17.358043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.609 [2024-06-10 11:16:17.427577] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.180 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:49.180 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:07:49.180 11:16:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3419462 00:07:49.180 11:16:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3419462 00:07:49.180 11:16:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:49.752 lslocks: write error 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3419462 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 3419462 ']' 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 3419462 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3419462 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3419462' 00:07:49.752 killing process with pid 3419462 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 3419462 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 3419462 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3419462 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 3419462 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- 
# waitforlisten 3419462 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 3419462 ']' 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:49.752 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (3419462) - No such process 00:07:49.752 ERROR: process (pid: 3419462) is no longer running 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:49.752 00:07:49.752 real 0m1.471s 00:07:49.752 user 0m1.582s 00:07:49.752 sys 0m0.480s 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:49.752 11:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:49.752 ************************************ 00:07:49.752 END TEST default_locks 00:07:49.752 ************************************ 00:07:50.014 11:16:18 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:50.014 11:16:18 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:50.014 11:16:18 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:50.014 11:16:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.014 ************************************ 00:07:50.014 START TEST default_locks_via_rpc 00:07:50.014 ************************************ 00:07:50.014 11:16:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:07:50.014 11:16:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3419821 00:07:50.014 11:16:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3419821 00:07:50.014 11:16:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:50.014 11:16:18 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3419821 ']' 00:07:50.014 11:16:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.014 11:16:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:50.014 11:16:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.014 11:16:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:50.014 11:16:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.014 [2024-06-10 11:16:18.836437] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:07:50.014 [2024-06-10 11:16:18.836487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419821 ] 00:07:50.014 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.014 [2024-06-10 11:16:18.898462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.014 [2024-06-10 11:16:18.970396] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.997 11:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:50.997 11:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:50.997 11:16:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:50.997 11:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:50.997 11:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.997 11:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:50.998 11:16:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:50.998 11:16:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:50.998 11:16:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:50.998 11:16:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:50.998 11:16:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:50.998 11:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:50.998 11:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.998 11:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:50.998 11:16:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3419821 00:07:50.998 11:16:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3419821 00:07:50.998 11:16:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:51.258 11:16:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3419821 00:07:51.258 11:16:20 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 3419821 ']' 00:07:51.258 11:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 3419821 00:07:51.258 11:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:07:51.258 11:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:51.258 11:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3419821 00:07:51.258 11:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:51.258 11:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:51.258 11:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3419821' 00:07:51.258 killing process with pid 3419821 00:07:51.258 11:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 3419821 00:07:51.258 11:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 3419821 00:07:51.518 00:07:51.518 real 0m1.539s 00:07:51.518 user 0m1.640s 00:07:51.518 sys 0m0.509s 00:07:51.518 11:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:51.518 11:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.518 ************************************ 00:07:51.518 END TEST default_locks_via_rpc 00:07:51.518 ************************************ 00:07:51.518 11:16:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:51.518 11:16:20 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:51.518 11:16:20 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:51.518 11:16:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:51.518 ************************************ 00:07:51.518 START TEST non_locking_app_on_locked_coremask 00:07:51.518 ************************************ 00:07:51.518 11:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:07:51.518 11:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3420186 00:07:51.518 11:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3420186 /var/tmp/spdk.sock 00:07:51.518 11:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:51.518 11:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3420186 ']' 00:07:51.518 11:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.518 11:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:51.518 11:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
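Editor's note: the default_locks_via_rpc run that finished just above drives the same lock through RPCs instead of startup flags: framework_disable_cpumask_locks releases the per-core lock files of a running target, and framework_enable_cpumask_locks re-claims them before the lslocks check. Outside the harness (rpc_cmd in the trace wraps scripts/rpc.py), the sequence would look roughly like this; the surrounding checks are assumptions:

    # drop the core locks held by the running target
    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    # at this point "lslocks -p $pid | grep spdk_cpu_lock" should find nothing
    # re-claim the locks and confirm they are back
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock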
00:07:51.518 11:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:51.518 11:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:51.518 [2024-06-10 11:16:20.449217] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:07:51.518 [2024-06-10 11:16:20.449262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420186 ] 00:07:51.518 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.778 [2024-06-10 11:16:20.507869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.778 [2024-06-10 11:16:20.572108] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.348 11:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:52.348 11:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:52.348 11:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3420483 00:07:52.348 11:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3420483 /var/tmp/spdk2.sock 00:07:52.348 11:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3420483 ']' 00:07:52.348 11:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:52.348 11:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:52.348 11:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:52.348 11:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:52.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:52.348 11:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:52.348 11:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:52.348 [2024-06-10 11:16:21.266225] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:07:52.348 [2024-06-10 11:16:21.266279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420483 ] 00:07:52.348 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.609 [2024-06-10 11:16:21.354986] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
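Editor's note: non_locking_app_on_locked_coremask stacks a second target on the core the first one already locked, which only works because the second instance opts out of lock claiming. Reduced to its essentials, the scenario being traced is roughly as follows (binary path shortened, backgrounding and waits elided):

    # first instance claims core 0 and the spdk_cpu_lock_000 file
    spdk_tgt -m 0x1 &
    # second instance shares core 0 but skips lock claiming, so it needs its
    # own RPC socket and is still expected to start successfully
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &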
00:07:52.609 [2024-06-10 11:16:21.355014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.609 [2024-06-10 11:16:21.484158] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.180 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:53.180 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:53.180 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3420186 00:07:53.180 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3420186 00:07:53.180 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:53.441 lslocks: write error 00:07:53.441 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3420186 00:07:53.441 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3420186 ']' 00:07:53.441 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 3420186 00:07:53.441 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:53.441 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:53.441 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3420186 00:07:53.441 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:53.441 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:53.441 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3420186' 00:07:53.441 killing process with pid 3420186 00:07:53.441 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 3420186 00:07:53.441 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 3420186 00:07:54.011 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3420483 00:07:54.011 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3420483 ']' 00:07:54.011 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 3420483 00:07:54.011 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:54.011 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:54.011 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3420483 00:07:54.011 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:54.011 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:54.011 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3420483' 00:07:54.011 
killing process with pid 3420483 00:07:54.011 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 3420483 00:07:54.011 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 3420483 00:07:54.272 00:07:54.272 real 0m2.592s 00:07:54.272 user 0m2.852s 00:07:54.272 sys 0m0.728s 00:07:54.272 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:54.272 11:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:54.272 ************************************ 00:07:54.272 END TEST non_locking_app_on_locked_coremask 00:07:54.272 ************************************ 00:07:54.272 11:16:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:54.272 11:16:23 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:54.272 11:16:23 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:54.272 11:16:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:54.272 ************************************ 00:07:54.272 START TEST locking_app_on_unlocked_coremask 00:07:54.272 ************************************ 00:07:54.272 11:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:07:54.272 11:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3420889 00:07:54.272 11:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3420889 /var/tmp/spdk.sock 00:07:54.272 11:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:54.272 11:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3420889 ']' 00:07:54.272 11:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.272 11:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:54.272 11:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.272 11:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:54.272 11:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:54.272 [2024-06-10 11:16:23.115474] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:07:54.272 [2024-06-10 11:16:23.115520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420889 ] 00:07:54.272 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.272 [2024-06-10 11:16:23.174418] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
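Editor's note: every run in this file tears its targets down through the same killprocess helper; from the repeated kill -0 / ps / kill / wait steps in the trace it behaves roughly as sketched below (the sudo special case is only hinted at in the trace, so it is left as a comment):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                              # fail early if the pid is already gone
        process_name=$(ps --no-headers -o comm= "$pid")
        # if the process turns out to be sudo, the real helper presumably
        # signals its child instead (not exercised in this run)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap it so the next test starts clean
    }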
00:07:54.272 [2024-06-10 11:16:23.174446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.272 [2024-06-10 11:16:23.238776] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.213 11:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:55.213 11:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:55.213 11:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3420908 00:07:55.213 11:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:55.213 11:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3420908 /var/tmp/spdk2.sock 00:07:55.213 11:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3420908 ']' 00:07:55.213 11:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:55.213 11:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:55.213 11:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:55.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:55.213 11:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:55.213 11:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:55.213 [2024-06-10 11:16:23.938691] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:07:55.213 [2024-06-10 11:16:23.938746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420908 ] 00:07:55.213 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.213 [2024-06-10 11:16:24.028565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.213 [2024-06-10 11:16:24.161901] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.784 11:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:55.784 11:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:55.784 11:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3420908 00:07:55.784 11:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:55.784 11:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3420908 00:07:56.044 lslocks: write error 00:07:56.044 11:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3420889 00:07:56.044 11:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3420889 ']' 00:07:56.044 11:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 3420889 00:07:56.044 11:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:56.044 11:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:56.044 11:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3420889 00:07:56.304 11:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:56.304 11:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:56.304 11:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3420889' 00:07:56.304 killing process with pid 3420889 00:07:56.304 11:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 3420889 00:07:56.304 11:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 3420889 00:07:56.565 11:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3420908 00:07:56.565 11:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3420908 ']' 00:07:56.565 11:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 3420908 00:07:56.565 11:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:56.565 11:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:56.565 11:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3420908 00:07:56.565 11:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 
00:07:56.565 11:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:56.565 11:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3420908' 00:07:56.565 killing process with pid 3420908 00:07:56.565 11:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 3420908 00:07:56.565 11:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 3420908 00:07:56.825 00:07:56.825 real 0m2.672s 00:07:56.825 user 0m2.916s 00:07:56.825 sys 0m0.769s 00:07:56.825 11:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:56.825 11:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:56.825 ************************************ 00:07:56.825 END TEST locking_app_on_unlocked_coremask 00:07:56.825 ************************************ 00:07:56.825 11:16:25 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:56.825 11:16:25 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:56.825 11:16:25 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:56.826 11:16:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:57.086 ************************************ 00:07:57.086 START TEST locking_app_on_locked_coremask 00:07:57.086 ************************************ 00:07:57.086 11:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:07:57.086 11:16:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3421326 00:07:57.086 11:16:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3421326 /var/tmp/spdk.sock 00:07:57.086 11:16:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:57.086 11:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3421326 ']' 00:07:57.086 11:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.086 11:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:57.086 11:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.086 11:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:57.086 11:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:57.086 [2024-06-10 11:16:25.871452] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:07:57.086 [2024-06-10 11:16:25.871502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421326 ] 00:07:57.086 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.086 [2024-06-10 11:16:25.932574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.086 [2024-06-10 11:16:25.999311] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.027 11:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:58.027 11:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:58.027 11:16:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3421607 00:07:58.027 11:16:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3421607 /var/tmp/spdk2.sock 00:07:58.027 11:16:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:58.027 11:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:07:58.027 11:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 3421607 /var/tmp/spdk2.sock 00:07:58.027 11:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:07:58.027 11:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:58.027 11:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:07:58.027 11:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:58.027 11:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 3421607 /var/tmp/spdk2.sock 00:07:58.027 11:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3421607 ']' 00:07:58.027 11:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:58.027 11:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:58.027 11:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:58.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:58.027 11:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:58.027 11:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:58.027 [2024-06-10 11:16:26.681946] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
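Editor's note: locking_app_on_locked_coremask flips the expectation: the second plain target launched here shares core 0 with pid 3421326, so it must fail to come up, and the test wraps waitforlisten in the NOT helper. From the es=0 / es=1 bookkeeping visible in the trace, NOT is roughly:

    NOT() {
        local es=0
        "$@" || es=$?        # run the wrapped command and capture its exit status
        # (the real helper also special-cases statuses above 128, i.e. signals)
        (( !es == 0 ))       # invert: NOT succeeds only if the command failed
    }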
00:07:58.027 [2024-06-10 11:16:26.681998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421607 ] 00:07:58.027 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.027 [2024-06-10 11:16:26.772068] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3421326 has claimed it. 00:07:58.027 [2024-06-10 11:16:26.772110] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:58.598 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (3421607) - No such process 00:07:58.598 ERROR: process (pid: 3421607) is no longer running 00:07:58.598 11:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:58.598 11:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:07:58.598 11:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:07:58.598 11:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:58.598 11:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:58.598 11:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:58.598 11:16:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3421326 00:07:58.598 11:16:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3421326 00:07:58.598 11:16:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:58.858 lslocks: write error 00:07:58.858 11:16:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3421326 00:07:58.858 11:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3421326 ']' 00:07:58.858 11:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 3421326 00:07:58.858 11:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:58.858 11:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:58.858 11:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3421326 00:07:58.858 11:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:58.858 11:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:58.858 11:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3421326' 00:07:58.858 killing process with pid 3421326 00:07:58.858 11:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 3421326 00:07:58.858 11:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 3421326 00:07:59.119 00:07:59.119 real 0m2.170s 00:07:59.119 user 0m2.427s 00:07:59.119 sys 0m0.584s 00:07:59.119 11:16:27 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:07:59.119 11:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:59.119 ************************************ 00:07:59.119 END TEST locking_app_on_locked_coremask 00:07:59.119 ************************************ 00:07:59.119 11:16:28 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:59.119 11:16:28 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:59.119 11:16:28 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:59.119 11:16:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:59.119 ************************************ 00:07:59.119 START TEST locking_overlapped_coremask 00:07:59.119 ************************************ 00:07:59.119 11:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:07:59.119 11:16:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3421972 00:07:59.119 11:16:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:59.119 11:16:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3421972 /var/tmp/spdk.sock 00:07:59.119 11:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 3421972 ']' 00:07:59.119 11:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.119 11:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:59.119 11:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.119 11:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:59.119 11:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:59.379 [2024-06-10 11:16:28.102603] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:07:59.379 [2024-06-10 11:16:28.102652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421972 ] 00:07:59.379 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.379 [2024-06-10 11:16:28.162000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:59.379 [2024-06-10 11:16:28.227792] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.379 [2024-06-10 11:16:28.227871] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.379 [2024-06-10 11:16:28.228057] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.948 11:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:59.949 11:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:59.949 11:16:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3421988 00:07:59.949 11:16:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3421988 /var/tmp/spdk2.sock 00:07:59.949 11:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:07:59.949 11:16:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:59.949 11:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 3421988 /var/tmp/spdk2.sock 00:07:59.949 11:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:07:59.949 11:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:59.949 11:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:07:59.949 11:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:59.949 11:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 3421988 /var/tmp/spdk2.sock 00:07:59.949 11:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 3421988 ']' 00:07:59.949 11:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:59.949 11:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:59.949 11:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:59.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:59.949 11:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:59.949 11:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:00.209 [2024-06-10 11:16:28.928974] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
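Editor's note: the overlapped launch now in progress fails by design. The first target took mask 0x7 while the second asks for 0x1c, and the two masks collide on core 2:

    # 0x7  = 0b00111 -> cores 0,1,2   (held by pid 3421972)
    # 0x1c = 0b11100 -> cores 2,3,4
    # both masks include core 2, so the second spdk_tgt cannot take the
    # core-2 lock and exits before it ever listens on /var/tmp/spdk2.sock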
00:08:00.209 [2024-06-10 11:16:28.929026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421988 ] 00:08:00.210 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.210 [2024-06-10 11:16:29.001560] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3421972 has claimed it. 00:08:00.210 [2024-06-10 11:16:29.001593] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:00.781 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (3421988) - No such process 00:08:00.781 ERROR: process (pid: 3421988) is no longer running 00:08:00.781 11:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:00.781 11:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:08:00.781 11:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:08:00.781 11:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:00.781 11:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:00.781 11:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:00.781 11:16:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:00.781 11:16:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:00.781 11:16:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:00.781 11:16:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:00.781 11:16:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3421972 00:08:00.781 11:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 3421972 ']' 00:08:00.781 11:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 3421972 00:08:00.781 11:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:08:00.781 11:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:00.781 11:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3421972 00:08:00.781 11:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:00.781 11:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:00.781 11:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3421972' 00:08:00.781 killing process with pid 3421972 00:08:00.781 11:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 3421972 
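Editor's note: once the overlapping launch has been rejected, the test confirms the surviving 0x7 target still owns exactly the lock files for cores 0 through 2. The check_remaining_locks expansion in the trace boils down to a sketch like:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files actually on disk
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 from mask 0x7
        # any extra or missing lock file makes the comparison, and the test, fail
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }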
00:08:00.781 11:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 3421972 00:08:01.043 00:08:01.043 real 0m1.741s 00:08:01.043 user 0m4.940s 00:08:01.043 sys 0m0.350s 00:08:01.043 11:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:01.043 11:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:01.043 ************************************ 00:08:01.043 END TEST locking_overlapped_coremask 00:08:01.043 ************************************ 00:08:01.043 11:16:29 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:01.043 11:16:29 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:01.043 11:16:29 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:01.043 11:16:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:01.043 ************************************ 00:08:01.043 START TEST locking_overlapped_coremask_via_rpc 00:08:01.043 ************************************ 00:08:01.043 11:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:08:01.043 11:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3422341 00:08:01.043 11:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3422341 /var/tmp/spdk.sock 00:08:01.043 11:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:01.043 11:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3422341 ']' 00:08:01.043 11:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.043 11:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:01.043 11:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.043 11:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:01.043 11:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.043 [2024-06-10 11:16:29.915458] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:08:01.043 [2024-06-10 11:16:29.915504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422341 ] 00:08:01.043 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.043 [2024-06-10 11:16:29.975209] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:01.043 [2024-06-10 11:16:29.975238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:01.304 [2024-06-10 11:16:30.041808] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.304 [2024-06-10 11:16:30.041866] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.304 [2024-06-10 11:16:30.041868] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.875 11:16:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:01.875 11:16:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:08:01.875 11:16:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3422364 00:08:01.875 11:16:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3422364 /var/tmp/spdk2.sock 00:08:01.875 11:16:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:01.875 11:16:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3422364 ']' 00:08:01.875 11:16:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:01.875 11:16:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:01.875 11:16:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:01.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:01.875 11:16:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:01.875 11:16:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.875 [2024-06-10 11:16:30.738438] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:08:01.875 [2024-06-10 11:16:30.738489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422364 ] 00:08:01.875 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.875 [2024-06-10 11:16:30.808477] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:01.875 [2024-06-10 11:16:30.808501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:02.136 [2024-06-10 11:16:30.922071] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.136 [2024-06-10 11:16:30.922190] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.136 [2024-06-10 11:16:30.922193] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.707 [2024-06-10 11:16:31.517827] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3422341 has claimed it. 
00:08:02.707 request: 00:08:02.707 { 00:08:02.707 "method": "framework_enable_cpumask_locks", 00:08:02.707 "req_id": 1 00:08:02.707 } 00:08:02.707 Got JSON-RPC error response 00:08:02.707 response: 00:08:02.707 { 00:08:02.707 "code": -32603, 00:08:02.707 "message": "Failed to claim CPU core: 2" 00:08:02.707 } 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3422341 /var/tmp/spdk.sock 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3422341 ']' 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:02.707 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.967 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:02.967 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:08:02.967 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3422364 /var/tmp/spdk2.sock 00:08:02.967 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3422364 ']' 00:08:02.967 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:02.967 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:02.967 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:02.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
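The JSON-RPC exchange above is the via-RPC variant of the same conflict: asked to take its core locks after startup, the second target fails on core 2 and returns -32603. Outside the harness's rpc_cmd wrapper, the same request could likely be issued with SPDK's rpc.py against the second socket; this sketch assumes a standard checkout where scripts/rpc.py exposes the framework_enable_cpumask_locks method named in the request body:
# Target 2 listens on /var/tmp/spdk2.sock (it was started with -r /var/tmp/spdk2.sock).
# While pid 3422341 still holds core 2, this returns the error shown above.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/spdk2.sock framework_enable_cpumask_locks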
00:08:02.967 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:02.967 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.967 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:02.967 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:08:02.967 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:02.967 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:02.967 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:02.967 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:02.967 00:08:02.967 real 0m1.996s 00:08:02.967 user 0m0.769s 00:08:02.967 sys 0m0.151s 00:08:02.967 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:02.967 11:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.967 ************************************ 00:08:02.967 END TEST locking_overlapped_coremask_via_rpc 00:08:02.967 ************************************ 00:08:02.967 11:16:31 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:02.967 11:16:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3422341 ]] 00:08:02.967 11:16:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3422341 00:08:02.967 11:16:31 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 3422341 ']' 00:08:02.967 11:16:31 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 3422341 00:08:02.967 11:16:31 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:08:02.967 11:16:31 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:02.967 11:16:31 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3422341 00:08:03.227 11:16:31 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:03.227 11:16:31 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:03.227 11:16:31 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3422341' 00:08:03.227 killing process with pid 3422341 00:08:03.227 11:16:31 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 3422341 00:08:03.227 11:16:31 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 3422341 00:08:03.227 11:16:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3422364 ]] 00:08:03.227 11:16:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3422364 00:08:03.227 11:16:32 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 3422364 ']' 00:08:03.227 11:16:32 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 3422364 00:08:03.227 11:16:32 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:08:03.227 11:16:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' 
Linux = Linux ']' 00:08:03.227 11:16:32 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3422364 00:08:03.487 11:16:32 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:08:03.487 11:16:32 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:08:03.487 11:16:32 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3422364' 00:08:03.487 killing process with pid 3422364 00:08:03.487 11:16:32 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 3422364 00:08:03.487 11:16:32 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 3422364 00:08:03.487 11:16:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:03.487 11:16:32 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:03.487 11:16:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3422341 ]] 00:08:03.487 11:16:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3422341 00:08:03.487 11:16:32 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 3422341 ']' 00:08:03.487 11:16:32 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 3422341 00:08:03.487 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (3422341) - No such process 00:08:03.487 11:16:32 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 3422341 is not found' 00:08:03.487 Process with pid 3422341 is not found 00:08:03.487 11:16:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3422364 ]] 00:08:03.487 11:16:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3422364 00:08:03.487 11:16:32 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 3422364 ']' 00:08:03.487 11:16:32 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 3422364 00:08:03.487 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (3422364) - No such process 00:08:03.487 11:16:32 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 3422364 is not found' 00:08:03.487 Process with pid 3422364 is not found 00:08:03.487 11:16:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:03.487 00:08:03.487 real 0m15.318s 00:08:03.487 user 0m26.695s 00:08:03.487 sys 0m4.422s 00:08:03.487 11:16:32 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:03.487 11:16:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:03.487 ************************************ 00:08:03.487 END TEST cpu_locks 00:08:03.487 ************************************ 00:08:03.747 00:08:03.747 real 0m40.835s 00:08:03.747 user 1m20.958s 00:08:03.747 sys 0m7.394s 00:08:03.747 11:16:32 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:03.747 11:16:32 event -- common/autotest_common.sh@10 -- # set +x 00:08:03.747 ************************************ 00:08:03.747 END TEST event 00:08:03.747 ************************************ 00:08:03.747 11:16:32 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:08:03.747 11:16:32 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:03.747 11:16:32 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:03.747 11:16:32 -- common/autotest_common.sh@10 -- # set +x 00:08:03.747 ************************************ 00:08:03.747 START TEST thread 00:08:03.747 ************************************ 00:08:03.747 11:16:32 thread -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:08:03.747 * Looking for test storage... 00:08:03.747 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:08:03.747 11:16:32 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:03.747 11:16:32 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:08:03.747 11:16:32 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:03.747 11:16:32 thread -- common/autotest_common.sh@10 -- # set +x 00:08:03.747 ************************************ 00:08:03.747 START TEST thread_poller_perf 00:08:03.747 ************************************ 00:08:03.747 11:16:32 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:03.747 [2024-06-10 11:16:32.690425] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:08:03.747 [2024-06-10 11:16:32.690514] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422924 ] 00:08:04.007 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.007 [2024-06-10 11:16:32.757992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.007 [2024-06-10 11:16:32.831534] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.007 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:04.946 ====================================== 00:08:04.946 busy:2411505112 (cyc) 00:08:04.946 total_run_count: 288000 00:08:04.946 tsc_hz: 2400000000 (cyc) 00:08:04.946 ====================================== 00:08:04.946 poller_cost: 8373 (cyc), 3488 (nsec) 00:08:04.946 00:08:04.946 real 0m1.225s 00:08:04.946 user 0m1.149s 00:08:04.946 sys 0m0.073s 00:08:04.946 11:16:33 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:04.946 11:16:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:04.946 ************************************ 00:08:04.946 END TEST thread_poller_perf 00:08:04.946 ************************************ 00:08:05.206 11:16:33 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:05.206 11:16:33 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:08:05.206 11:16:33 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:05.206 11:16:33 thread -- common/autotest_common.sh@10 -- # set +x 00:08:05.206 ************************************ 00:08:05.206 START TEST thread_poller_perf 00:08:05.206 ************************************ 00:08:05.206 11:16:33 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:05.206 [2024-06-10 11:16:33.991543] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:08:05.206 [2024-06-10 11:16:33.991633] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423146 ] 00:08:05.206 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.206 [2024-06-10 11:16:34.054731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.206 [2024-06-10 11:16:34.120287] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.206 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:06.206 ====================================== 00:08:06.206 busy:2402051732 (cyc) 00:08:06.206 total_run_count: 3812000 00:08:06.206 tsc_hz: 2400000000 (cyc) 00:08:06.206 ====================================== 00:08:06.206 poller_cost: 630 (cyc), 262 (nsec) 00:08:06.206 00:08:06.206 real 0m1.205s 00:08:06.206 user 0m1.127s 00:08:06.206 sys 0m0.074s 00:08:06.206 11:16:35 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:06.206 11:16:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:06.206 ************************************ 00:08:06.206 END TEST thread_poller_perf 00:08:06.206 ************************************ 00:08:06.468 11:16:35 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:06.468 00:08:06.468 real 0m2.683s 00:08:06.468 user 0m2.385s 00:08:06.468 sys 0m0.305s 00:08:06.468 11:16:35 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:06.468 11:16:35 thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.468 ************************************ 00:08:06.468 END TEST thread 00:08:06.468 ************************************ 00:08:06.468 11:16:35 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:08:06.468 11:16:35 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:06.468 11:16:35 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:06.468 11:16:35 -- common/autotest_common.sh@10 -- # set +x 00:08:06.468 ************************************ 00:08:06.468 START TEST accel 00:08:06.468 ************************************ 00:08:06.468 11:16:35 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:08:06.468 * Looking for test storage... 00:08:06.468 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:08:06.468 11:16:35 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:08:06.468 11:16:35 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:08:06.468 11:16:35 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:06.468 11:16:35 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3423546 00:08:06.468 11:16:35 accel -- accel/accel.sh@63 -- # waitforlisten 3423546 00:08:06.468 11:16:35 accel -- common/autotest_common.sh@830 -- # '[' -z 3423546 ']' 00:08:06.468 11:16:35 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.468 11:16:35 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:06.468 11:16:35 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:06.468 11:16:35 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
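The two poller_perf summaries above are internally consistent: poller_cost is busy cycles divided by total_run_count, converted to nanoseconds against the reported 2400000000 cyc/s TSC. Re-deriving both results with shell integer arithmetic, which rounds down the same way the printed values do:
# 1 us period: 2411505112 cyc / 288000 polls = 8373 cyc, ~3488 nsec at 2.4 GHz
echo $(( 2411505112 / 288000 ))    # 8373
echo $(( 8373 * 1000 / 2400 ))     # 3488
# 0 us period: 2402051732 cyc / 3812000 polls = 630 cyc, ~262 nsec
echo $(( 2402051732 / 3812000 ))   # 630
echo $(( 630 * 1000 / 2400 ))      # 262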
00:08:06.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.468 11:16:35 accel -- accel/accel.sh@61 -- # build_accel_config 00:08:06.468 11:16:35 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:06.468 11:16:35 accel -- common/autotest_common.sh@10 -- # set +x 00:08:06.468 11:16:35 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:06.468 11:16:35 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:06.468 11:16:35 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.468 11:16:35 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.468 11:16:35 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:06.468 11:16:35 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:06.468 11:16:35 accel -- accel/accel.sh@41 -- # jq -r . 00:08:06.468 [2024-06-10 11:16:35.438609] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:08:06.468 [2024-06-10 11:16:35.438674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423546 ] 00:08:06.728 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.728 [2024-06-10 11:16:35.504535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.728 [2024-06-10 11:16:35.580162] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.299 11:16:36 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:07.299 11:16:36 accel -- common/autotest_common.sh@863 -- # return 0 00:08:07.299 11:16:36 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:08:07.299 11:16:36 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:08:07.299 11:16:36 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:08:07.299 11:16:36 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:08:07.299 11:16:36 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:07.299 11:16:36 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:08:07.299 11:16:36 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:08:07.299 11:16:36 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:07.299 11:16:36 accel -- common/autotest_common.sh@10 -- # set +x 00:08:07.299 11:16:36 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:07.299 11:16:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.299 11:16:36 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.299 11:16:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.299 11:16:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.299 11:16:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.299 11:16:36 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.299 11:16:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.299 11:16:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.559 11:16:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.559 11:16:36 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.559 11:16:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.559 11:16:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.559 11:16:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.559 11:16:36 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.559 11:16:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.559 11:16:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.559 11:16:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.559 11:16:36 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.559 11:16:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.559 11:16:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.559 11:16:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.559 11:16:36 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.559 11:16:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.559 11:16:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.559 11:16:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.559 11:16:36 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.559 11:16:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.559 11:16:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.559 11:16:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.559 11:16:36 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.559 11:16:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.559 11:16:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.559 11:16:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.559 11:16:36 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.559 11:16:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.559 11:16:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.559 11:16:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.559 11:16:36 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.559 11:16:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.559 11:16:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.559 11:16:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.559 11:16:36 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.559 11:16:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.559 
11:16:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.559 11:16:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.559 11:16:36 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.559 11:16:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.559 11:16:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.559 11:16:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.559 11:16:36 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.559 11:16:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.559 11:16:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.559 11:16:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.559 11:16:36 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.559 11:16:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.559 11:16:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.560 11:16:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.560 11:16:36 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.560 11:16:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.560 11:16:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.560 11:16:36 accel -- accel/accel.sh@75 -- # killprocess 3423546 00:08:07.560 11:16:36 accel -- common/autotest_common.sh@949 -- # '[' -z 3423546 ']' 00:08:07.560 11:16:36 accel -- common/autotest_common.sh@953 -- # kill -0 3423546 00:08:07.560 11:16:36 accel -- common/autotest_common.sh@954 -- # uname 00:08:07.560 11:16:36 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:07.560 11:16:36 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3423546 00:08:07.560 11:16:36 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:07.560 11:16:36 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:07.560 11:16:36 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3423546' 00:08:07.560 killing process with pid 3423546 00:08:07.560 11:16:36 accel -- common/autotest_common.sh@968 -- # kill 3423546 00:08:07.560 11:16:36 accel -- common/autotest_common.sh@973 -- # wait 3423546 00:08:07.820 11:16:36 accel -- accel/accel.sh@76 -- # trap - ERR 00:08:07.820 11:16:36 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:08:07.820 11:16:36 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:07.820 11:16:36 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:07.820 11:16:36 accel -- common/autotest_common.sh@10 -- # set +x 00:08:07.820 11:16:36 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:08:07.820 11:16:36 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:08:07.820 11:16:36 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:08:07.820 11:16:36 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:07.820 11:16:36 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:07.820 11:16:36 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.820 11:16:36 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.820 11:16:36 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:07.820 11:16:36 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:08:07.820 11:16:36 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:08:07.820 11:16:36 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:07.820 11:16:36 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:08:07.820 11:16:36 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:08:07.820 11:16:36 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:08:07.820 11:16:36 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:07.820 11:16:36 accel -- common/autotest_common.sh@10 -- # set +x 00:08:07.820 ************************************ 00:08:07.820 START TEST accel_missing_filename 00:08:07.820 ************************************ 00:08:07.820 11:16:36 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:08:07.820 11:16:36 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:08:07.820 11:16:36 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:08:07.820 11:16:36 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:08:07.820 11:16:36 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:07.820 11:16:36 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:08:07.820 11:16:36 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:07.820 11:16:36 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:08:07.820 11:16:36 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:08:07.820 11:16:36 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:08:07.820 11:16:36 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:07.820 11:16:36 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:07.820 11:16:36 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.820 11:16:36 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.820 11:16:36 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:07.820 11:16:36 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:08:07.820 11:16:36 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:08:07.820 [2024-06-10 11:16:36.711916] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:08:07.820 [2024-06-10 11:16:36.711979] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423912 ] 00:08:07.820 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.820 [2024-06-10 11:16:36.773796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.081 [2024-06-10 11:16:36.840084] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.081 [2024-06-10 11:16:36.871853] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.081 [2024-06-10 11:16:36.908675] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:08:08.081 A filename is required. 
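'A filename is required.' is the expected failure here: the compress workload reads its input from a file, so accel_perf aborts during startup when -l is missing. A sketch of the failing invocation next to a likely minimal correct one, reusing the binary path and the test input file that the compress_verify case below points at:
bin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
$bin -t 1 -w compress    # fails as above: no input file given
$bin -t 1 -w compress \
    -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib
# Adding -y to the second form triggers the 'Compression does not support
# the verify option' abort that compress_verify exercises next.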
00:08:08.081 11:16:36 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:08:08.081 11:16:36 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:08.081 11:16:36 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:08:08.081 11:16:36 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:08:08.081 11:16:36 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:08:08.081 11:16:36 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:08.081 00:08:08.081 real 0m0.281s 00:08:08.081 user 0m0.215s 00:08:08.081 sys 0m0.107s 00:08:08.081 11:16:36 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:08.081 11:16:36 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:08:08.081 ************************************ 00:08:08.081 END TEST accel_missing_filename 00:08:08.081 ************************************ 00:08:08.081 11:16:36 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:08.081 11:16:36 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:08:08.081 11:16:36 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:08.081 11:16:36 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.081 ************************************ 00:08:08.081 START TEST accel_compress_verify 00:08:08.081 ************************************ 00:08:08.081 11:16:37 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:08.081 11:16:37 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:08:08.081 11:16:37 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:08.081 11:16:37 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:08:08.081 11:16:37 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.081 11:16:37 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:08:08.081 11:16:37 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.081 11:16:37 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:08.081 11:16:37 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:08.081 11:16:37 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:08.081 11:16:37 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.081 11:16:37 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.081 11:16:37 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.081 11:16:37 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.081 11:16:37 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.081 11:16:37 
accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:08.081 11:16:37 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:08:08.341 [2024-06-10 11:16:37.065987] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:08:08.341 [2024-06-10 11:16:37.066063] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423937 ] 00:08:08.341 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.341 [2024-06-10 11:16:37.131363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.341 [2024-06-10 11:16:37.203596] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.341 [2024-06-10 11:16:37.235707] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.341 [2024-06-10 11:16:37.272239] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:08:08.603 00:08:08.603 Compression does not support the verify option, aborting. 00:08:08.603 11:16:37 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:08:08.603 11:16:37 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:08.603 11:16:37 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:08:08.603 11:16:37 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:08:08.603 11:16:37 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:08:08.603 11:16:37 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:08.603 00:08:08.603 real 0m0.290s 00:08:08.603 user 0m0.225s 00:08:08.603 sys 0m0.103s 00:08:08.603 11:16:37 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:08.603 11:16:37 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:08:08.603 ************************************ 00:08:08.603 END TEST accel_compress_verify 00:08:08.603 ************************************ 00:08:08.603 11:16:37 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:08.603 11:16:37 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:08:08.603 11:16:37 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:08.603 11:16:37 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.603 ************************************ 00:08:08.603 START TEST accel_wrong_workload 00:08:08.603 ************************************ 00:08:08.603 11:16:37 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:08:08.603 11:16:37 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:08:08.603 11:16:37 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:08.603 11:16:37 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:08:08.603 11:16:37 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.603 11:16:37 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:08:08.603 11:16:37 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.603 11:16:37 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:08:08.603 
11:16:37 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:08.604 11:16:37 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:08:08.604 11:16:37 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.604 11:16:37 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.604 11:16:37 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.604 11:16:37 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.604 11:16:37 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.604 11:16:37 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:08:08.604 11:16:37 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:08:08.604 Unsupported workload type: foobar 00:08:08.604 [2024-06-10 11:16:37.413249] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:08.604 accel_perf options: 00:08:08.604 [-h help message] 00:08:08.604 [-q queue depth per core] 00:08:08.604 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:08.604 [-T number of threads per core 00:08:08.604 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:08.604 [-t time in seconds] 00:08:08.604 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:08.604 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:08.604 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:08.604 [-l for compress/decompress workloads, name of uncompressed input file 00:08:08.604 [-S for crc32c workload, use this seed value (default 0) 00:08:08.604 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:08.604 [-f for fill workload, use this BYTE value (default 255) 00:08:08.604 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:08.604 [-y verify result if this switch is on] 00:08:08.604 [-a tasks to allocate per core (default: same value as -q)] 00:08:08.604 Can be used to spread operations across a wider range of memory. 
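The usage dump above enumerates every -w value the parser accepts; foobar fails that check before any work is queued, and the following case feeds the same parser a negative -x. Sketched invocations against the options as listed, with the same binary path as earlier in this trace:
bin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
$bin -t 1 -w foobar          # rejected: 'Unsupported workload type: foobar'
$bin -t 1 -w xor -y -x -1    # rejected next: '-x option must be non-negative.'
$bin -t 1 -w xor -y -x 2     # parses: xor with its documented minimum of 2 source buffers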
00:08:08.604 11:16:37 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:08:08.604 11:16:37 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:08.604 11:16:37 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:08.604 11:16:37 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:08.604 00:08:08.604 real 0m0.020s 00:08:08.604 user 0m0.012s 00:08:08.604 sys 0m0.008s 00:08:08.604 11:16:37 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:08.604 11:16:37 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:08:08.604 ************************************ 00:08:08.604 END TEST accel_wrong_workload 00:08:08.604 ************************************ 00:08:08.604 Error: writing output failed: Broken pipe 00:08:08.604 11:16:37 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:08.604 11:16:37 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:08:08.604 11:16:37 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:08.604 11:16:37 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.604 ************************************ 00:08:08.604 START TEST accel_negative_buffers 00:08:08.604 ************************************ 00:08:08.604 11:16:37 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:08.604 11:16:37 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:08:08.604 11:16:37 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:08.604 11:16:37 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:08:08.604 11:16:37 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.604 11:16:37 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:08:08.604 11:16:37 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.604 11:16:37 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:08:08.604 11:16:37 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:08.604 11:16:37 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:08:08.604 11:16:37 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.604 11:16:37 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.604 11:16:37 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.604 11:16:37 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.604 11:16:37 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.604 11:16:37 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:08:08.604 11:16:37 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:08:08.604 -x option must be non-negative. 
00:08:08.604 [2024-06-10 11:16:37.522844] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:08.604 accel_perf options: 00:08:08.604 [-h help message] 00:08:08.604 [-q queue depth per core] 00:08:08.604 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:08.604 [-T number of threads per core 00:08:08.604 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:08.604 [-t time in seconds] 00:08:08.604 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:08.604 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:08.604 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:08.604 [-l for compress/decompress workloads, name of uncompressed input file 00:08:08.604 [-S for crc32c workload, use this seed value (default 0) 00:08:08.604 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:08.604 [-f for fill workload, use this BYTE value (default 255) 00:08:08.604 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:08.604 [-y verify result if this switch is on] 00:08:08.604 [-a tasks to allocate per core (default: same value as -q)] 00:08:08.604 Can be used to spread operations across a wider range of memory. 00:08:08.604 11:16:37 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:08:08.604 11:16:37 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:08.604 11:16:37 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:08.604 11:16:37 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:08.604 00:08:08.604 real 0m0.036s 00:08:08.604 user 0m0.027s 00:08:08.604 sys 0m0.008s 00:08:08.604 11:16:37 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:08.604 11:16:37 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:08:08.604 ************************************ 00:08:08.604 END TEST accel_negative_buffers 00:08:08.604 ************************************ 00:08:08.604 Error: writing output failed: Broken pipe 00:08:08.604 11:16:37 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:08.604 11:16:37 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:08:08.604 11:16:37 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:08.604 11:16:37 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.866 ************************************ 00:08:08.866 START TEST accel_crc32c 00:08:08.866 ************************************ 00:08:08.866 11:16:37 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
crc32c -S 32 -y 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:08.866 [2024-06-10 11:16:37.632691] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:08:08.866 [2024-06-10 11:16:37.632752] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3424071 ] 00:08:08.866 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.866 [2024-06-10 11:16:37.694040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.866 [2024-06-10 11:16:37.760056] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.866 11:16:37 accel.accel_crc32c 
-- accel/accel.sh@19 -- # read -r var val 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:08.866 11:16:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.867 11:16:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.253 11:16:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.253 11:16:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.253 11:16:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.253 11:16:38 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:08:10.253 11:16:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.253 11:16:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.253 11:16:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.253 11:16:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.253 11:16:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.253 11:16:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.253 11:16:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.253 11:16:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.253 11:16:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.253 11:16:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.253 11:16:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.253 11:16:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.253 11:16:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.253 11:16:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.254 11:16:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.254 11:16:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.254 11:16:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.254 11:16:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.254 11:16:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.254 11:16:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.254 11:16:38 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:10.254 11:16:38 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:10.254 11:16:38 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:10.254 00:08:10.254 real 0m1.284s 00:08:10.254 user 0m1.199s 00:08:10.254 sys 0m0.097s 00:08:10.254 11:16:38 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:10.254 11:16:38 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:10.254 ************************************ 00:08:10.254 END TEST accel_crc32c 00:08:10.254 ************************************ 00:08:10.254 11:16:38 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:10.254 11:16:38 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:08:10.254 11:16:38 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:10.254 11:16:38 accel -- common/autotest_common.sh@10 -- # set +x 00:08:10.254 ************************************ 00:08:10.254 START TEST accel_crc32c_C2 00:08:10.254 ************************************ 00:08:10.254 11:16:38 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:10.254 11:16:38 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:10.254 11:16:38 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:10.254 11:16:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:10.254 11:16:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.254 11:16:38 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:10.254 11:16:38 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:10.254 11:16:38 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # build_accel_config 00:08:10.254 11:16:38 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:10.254 11:16:38 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:10.254 11:16:38 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.254 11:16:38 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.254 11:16:38 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:10.254 11:16:38 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:10.254 11:16:38 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:10.254 [2024-06-10 11:16:38.993689] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:08:10.254 [2024-06-10 11:16:38.993785] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3424355 ] 00:08:10.254 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.254 [2024-06-10 11:16:39.055003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.254 [2024-06-10 11:16:39.119823] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:10.254 11:16:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.638 11:16:40 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:11.638 00:08:11.638 real 0m1.283s 00:08:11.638 user 0m1.192s 00:08:11.638 sys 0m0.102s 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:11.638 11:16:40 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:11.638 ************************************ 00:08:11.638 END TEST accel_crc32c_C2 00:08:11.638 ************************************ 00:08:11.638 11:16:40 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:11.638 11:16:40 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:08:11.638 11:16:40 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:11.638 11:16:40 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.638 ************************************ 00:08:11.638 START TEST accel_copy 00:08:11.638 ************************************ 00:08:11.638 11:16:40 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:11.638 11:16:40 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:08:11.638 [2024-06-10 11:16:40.354511] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:08:11.638 [2024-06-10 11:16:40.354606] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3424708 ] 00:08:11.638 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.638 [2024-06-10 11:16:40.419105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.638 [2024-06-10 11:16:40.489596] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.638 11:16:40 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.638 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.639 11:16:40 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:11.639 11:16:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.639 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.639 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.639 11:16:40 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:08:11.639 11:16:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.639 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.639 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.639 11:16:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:11.639 11:16:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.639 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.639 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.639 11:16:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:11.639 11:16:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.639 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.639 11:16:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
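For reference, the copy case traced above is driven by accel_perf with the flags recorded in the xtrace (-t 1 -w copy -y); the harness additionally passes a generated JSON config on /dev/fd/62 via build_accel_config. A minimal standalone sketch from an SPDK build tree, assuming the config can be omitted and the software accel module is acceptable:

  ./build/examples/accel_perf -t 1 -w copy -y

The '4096 bytes' and '1 seconds' values in the val= lines appear to be the transfer size and run time being read back by the read -r var val loop shown in the trace.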
00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:08:13.021 11:16:41 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.021 00:08:13.021 real 0m1.295s 00:08:13.021 user 0m1.204s 00:08:13.021 sys 0m0.101s 00:08:13.021 11:16:41 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:13.021 11:16:41 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:08:13.021 ************************************ 00:08:13.021 END TEST accel_copy 00:08:13.021 ************************************ 00:08:13.021 11:16:41 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:13.021 11:16:41 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:08:13.021 11:16:41 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:13.021 11:16:41 accel -- common/autotest_common.sh@10 -- # set +x 00:08:13.021 ************************************ 00:08:13.021 START TEST accel_fill 00:08:13.021 ************************************ 00:08:13.021 11:16:41 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.021 11:16:41 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:08:13.021 [2024-06-10 11:16:41.722644] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:08:13.021 [2024-06-10 11:16:41.722735] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3425057 ] 00:08:13.021 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.021 [2024-06-10 11:16:41.784284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.021 [2024-06-10 11:16:41.849588] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.021 11:16:41 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.021 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.022 11:16:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:08:14.415 11:16:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:14.415 00:08:14.415 real 0m1.285s 00:08:14.415 user 0m1.191s 00:08:14.415 sys 0m0.105s 00:08:14.415 11:16:42 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:14.415 11:16:42 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:08:14.415 ************************************ 00:08:14.415 END TEST accel_fill 00:08:14.415 ************************************ 00:08:14.415 11:16:43 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:14.415 11:16:43 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:08:14.415 11:16:43 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:14.415 11:16:43 accel -- common/autotest_common.sh@10 -- # set +x 00:08:14.415 ************************************ 00:08:14.415 START TEST accel_copy_crc32c 00:08:14.415 ************************************ 00:08:14.415 11:16:43 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:08:14.415 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:14.415 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:14.415 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.415 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.415 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:14.415 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:14.415 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:14.415 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:14.415 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:14.415 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:14.415 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:14.415 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:14.415 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:14.415 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
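The fill case that just completed uses a longer flag set; the trace records accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y being handed to accel_perf. Under the same assumptions as the copy sketch above:

  ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y

The val=0x80 and the two val=64 entries in the xtrace correspond to those -f/-q/-a arguments (0x80 = 128) echoed back through the same read loop.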
00:08:14.415 [2024-06-10 11:16:43.082939] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:08:14.415 [2024-06-10 11:16:43.083004] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3425383 ] 00:08:14.415 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.415 [2024-06-10 11:16:43.146435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.415 [2024-06-10 11:16:43.216522] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.415 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:14.415 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.415 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.415 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.415 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:14.415 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.415 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.415 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:14.416 11:16:43 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.416 11:16:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:15.803 00:08:15.803 real 0m1.291s 00:08:15.803 user 0m1.201s 00:08:15.803 sys 0m0.101s 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:15.803 11:16:44 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:15.803 ************************************ 00:08:15.803 END TEST accel_copy_crc32c 00:08:15.803 ************************************ 00:08:15.803 11:16:44 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:15.803 11:16:44 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:08:15.803 11:16:44 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:15.803 11:16:44 accel -- common/autotest_common.sh@10 -- # set +x 00:08:15.803 ************************************ 00:08:15.803 START TEST accel_copy_crc32c_C2 00:08:15.803 ************************************ 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:15.803 [2024-06-10 11:16:44.451656] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:08:15.803 [2024-06-10 11:16:44.451719] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3425584 ] 00:08:15.803 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.803 [2024-06-10 11:16:44.529213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.803 [2024-06-10 11:16:44.602358] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 
-- # accel_opc=copy_crc32c 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.803 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.804 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:15.804 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.804 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.804 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.804 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:15.804 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.804 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.804 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.804 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:15.804 11:16:44 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:08:15.804 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.804 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.804 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.804 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.804 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.804 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.804 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.804 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.804 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.804 11:16:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.188 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.188 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.188 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.188 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.188 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.188 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.188 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.188 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.189 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.189 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.189 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.189 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.189 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.189 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.189 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.189 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.189 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.189 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.189 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.189 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.189 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.189 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.189 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.189 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.189 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:17.189 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:17.189 11:16:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:17.189 00:08:17.189 real 0m1.309s 00:08:17.189 user 0m1.202s 00:08:17.189 sys 0m0.119s 00:08:17.189 11:16:45 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:17.189 11:16:45 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:17.189 
************************************ 00:08:17.189 END TEST accel_copy_crc32c_C2 00:08:17.189 ************************************ 00:08:17.189 11:16:45 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:17.189 11:16:45 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:08:17.189 11:16:45 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:17.189 11:16:45 accel -- common/autotest_common.sh@10 -- # set +x 00:08:17.189 ************************************ 00:08:17.189 START TEST accel_dualcast 00:08:17.189 ************************************ 00:08:17.189 11:16:45 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:17.189 [2024-06-10 11:16:45.836038] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
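The dualcast run starting here follows the same pattern; the trace shows accel_perf -c /dev/fd/62 -t 1 -w dualcast -y. A hedged reproduction under the same assumptions as the earlier sketches:

  ./build/examples/accel_perf -t 1 -w dualcast -y

Only the -w workload name changes between these cases; the build_accel_config / jq -r . preamble is identical each time.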
00:08:17.189 [2024-06-10 11:16:45.836107] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3425800 ] 00:08:17.189 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.189 [2024-06-10 11:16:45.897840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.189 [2024-06-10 11:16:45.964697] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:17.189 11:16:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.189 
11:16:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.189 11:16:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.133 11:16:47 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:18.133 11:16:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:18.133 00:08:18.133 real 0m1.285s 00:08:18.133 user 0m1.193s 00:08:18.133 sys 0m0.103s 00:08:18.133 11:16:47 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:18.133 11:16:47 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:18.133 ************************************ 00:08:18.133 END TEST accel_dualcast 00:08:18.133 ************************************ 00:08:18.395 11:16:47 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:18.395 11:16:47 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:08:18.395 11:16:47 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:18.395 11:16:47 accel -- common/autotest_common.sh@10 -- # set +x 00:08:18.395 ************************************ 00:08:18.395 START TEST accel_compare 00:08:18.395 ************************************ 00:08:18.395 11:16:47 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:18.395 [2024-06-10 11:16:47.197507] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:08:18.395 [2024-06-10 11:16:47.197605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3426145 ] 00:08:18.395 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.395 [2024-06-10 11:16:47.259503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.395 [2024-06-10 11:16:47.325912] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:18.395 11:16:47 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:18.395 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:18.656 11:16:47 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:18.656 11:16:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:18.656 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:18.656 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:18.656 11:16:47 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:18.656 11:16:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:18.656 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:18.656 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:18.656 11:16:47 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:18.656 11:16:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:18.656 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:18.656 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:18.656 11:16:47 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:18.656 11:16:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:18.656 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:18.656 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:18.656 11:16:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:18.656 11:16:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:18.656 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:18.656 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:18.656 11:16:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:18.656 11:16:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:18.656 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:18.656 11:16:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.599 11:16:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:19.599 11:16:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.599 11:16:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.599 11:16:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.599 11:16:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:19.599 11:16:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.599 11:16:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.599 11:16:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.599 11:16:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:19.599 11:16:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.599 11:16:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.600 11:16:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.600 11:16:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:19.600 11:16:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.600 11:16:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.600 11:16:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.600 11:16:48 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:08:19.600 11:16:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.600 11:16:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.600 11:16:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.600 11:16:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:19.600 11:16:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.600 11:16:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.600 11:16:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.600 11:16:48 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:19.600 11:16:48 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:19.600 11:16:48 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:19.600 00:08:19.600 real 0m1.287s 00:08:19.600 user 0m1.203s 00:08:19.600 sys 0m0.094s 00:08:19.600 11:16:48 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:19.600 11:16:48 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:19.600 ************************************ 00:08:19.600 END TEST accel_compare 00:08:19.600 ************************************ 00:08:19.600 11:16:48 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:19.600 11:16:48 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:08:19.600 11:16:48 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:19.600 11:16:48 accel -- common/autotest_common.sh@10 -- # set +x 00:08:19.600 ************************************ 00:08:19.600 START TEST accel_xor 00:08:19.600 ************************************ 00:08:19.600 11:16:48 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:08:19.600 11:16:48 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:19.600 11:16:48 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:19.600 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.600 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.600 11:16:48 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:19.600 11:16:48 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:19.600 11:16:48 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:19.600 11:16:48 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:19.600 11:16:48 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:19.600 11:16:48 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:19.600 11:16:48 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:19.600 11:16:48 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:19.600 11:16:48 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:19.600 11:16:48 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:19.600 [2024-06-10 11:16:48.559085] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:08:19.600 [2024-06-10 11:16:48.559148] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3426501 ] 00:08:19.861 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.861 [2024-06-10 11:16:48.619375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.861 [2024-06-10 11:16:48.684381] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.861 11:16:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.862 11:16:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.246 
11:16:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:21.246 00:08:21.246 real 0m1.282s 00:08:21.246 user 0m1.195s 00:08:21.246 sys 0m0.098s 00:08:21.246 11:16:49 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:21.246 11:16:49 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:21.246 ************************************ 00:08:21.246 END TEST accel_xor 00:08:21.246 ************************************ 00:08:21.246 11:16:49 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:21.246 11:16:49 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:08:21.246 11:16:49 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:21.246 11:16:49 accel -- common/autotest_common.sh@10 -- # set +x 00:08:21.246 ************************************ 00:08:21.246 START TEST accel_xor 00:08:21.246 ************************************ 00:08:21.246 11:16:49 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:21.246 11:16:49 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:21.246 [2024-06-10 11:16:49.916867] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:08:21.246 [2024-06-10 11:16:49.916927] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3426850 ] 00:08:21.246 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.246 [2024-06-10 11:16:49.978224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.246 [2024-06-10 11:16:50.046743] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.246 11:16:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.246 11:16:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.246 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.246 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.246 11:16:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.246 11:16:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.246 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.246 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.246 11:16:50 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:21.246 11:16:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.246 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.246 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.246 11:16:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.246 11:16:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.246 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.246 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.246 11:16:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.246 11:16:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.246 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.246 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.246 11:16:50 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.247 11:16:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:22.638 
11:16:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:22.638 11:16:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:22.638 00:08:22.638 real 0m1.289s 00:08:22.638 user 0m1.202s 00:08:22.638 sys 0m0.098s 00:08:22.638 11:16:51 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:22.638 11:16:51 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:22.638 ************************************ 00:08:22.638 END TEST accel_xor 00:08:22.638 ************************************ 00:08:22.638 11:16:51 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:22.638 11:16:51 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:08:22.638 11:16:51 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:22.638 11:16:51 accel -- common/autotest_common.sh@10 -- # set +x 00:08:22.638 ************************************ 00:08:22.638 START TEST accel_dif_verify 00:08:22.638 ************************************ 00:08:22.638 11:16:51 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:22.639 [2024-06-10 11:16:51.282452] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:08:22.639 [2024-06-10 11:16:51.282548] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3427039 ] 00:08:22.639 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.639 [2024-06-10 11:16:51.347543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.639 [2024-06-10 11:16:51.419831] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.639 
11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.639 11:16:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:23.612 
11:16:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:23.612 11:16:52 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:23.612 00:08:23.612 real 0m1.298s 00:08:23.612 user 0m1.206s 00:08:23.612 sys 0m0.105s 00:08:23.612 11:16:52 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:23.612 11:16:52 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:23.612 ************************************ 00:08:23.612 END TEST accel_dif_verify 00:08:23.612 ************************************ 00:08:23.873 11:16:52 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:23.873 11:16:52 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:08:23.873 11:16:52 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:23.873 11:16:52 accel -- common/autotest_common.sh@10 -- # set +x 00:08:23.873 ************************************ 00:08:23.873 START TEST accel_dif_generate 00:08:23.873 ************************************ 00:08:23.873 11:16:52 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:08:23.873 11:16:52 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:23.873 11:16:52 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:23.873 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.873 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.873 
11:16:52 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:23.873 11:16:52 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:23.873 11:16:52 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:23.873 11:16:52 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:23.873 11:16:52 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:23.873 11:16:52 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:23.873 11:16:52 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:23.873 11:16:52 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:23.874 [2024-06-10 11:16:52.654322] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:08:23.874 [2024-06-10 11:16:52.654383] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3427252 ] 00:08:23.874 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.874 [2024-06-10 11:16:52.717807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.874 [2024-06-10 11:16:52.788019] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@23 
-- # accel_opc=dif_generate 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.874 11:16:52 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.874 11:16:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:25.261 11:16:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:25.261 11:16:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:25.261 11:16:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:25.261 11:16:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:25.261 11:16:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:25.261 11:16:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:25.261 11:16:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:25.261 11:16:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:25.261 11:16:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:25.261 11:16:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:25.261 11:16:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:25.261 11:16:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:25.261 11:16:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:25.261 11:16:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:25.261 11:16:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:25.261 11:16:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:25.261 11:16:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:25.261 11:16:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:25.261 11:16:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:25.261 11:16:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:25.261 11:16:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:25.262 11:16:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:25.262 11:16:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:25.262 11:16:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:25.262 11:16:53 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:25.262 11:16:53 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:25.262 11:16:53 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:25.262 00:08:25.262 real 0m1.292s 00:08:25.262 user 0m1.198s 00:08:25.262 sys 0m0.106s 00:08:25.262 
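
The dif_generate pass above finishes in roughly 1.29 s of wall time on the software module, and the traced vals show the sizes fed to accel_perf (4096-, 512- and 8-byte parameters, which by the look of the harness are the buffer, block and protection-information sizes). Two quirks of the trace worth decoding: build_accel_config assembles a JSON accel config and hands it to accel_perf as -c /dev/fd/62 over a file descriptor, and the odd-looking [[ software == \s\o\f\t\w\a\r\e ]] check is plain bash, every character escaped so == compares literally instead of glob-matching. A minimal hand-run of the same workload, as a sketch that skips the fd-62 config and therefore takes accel_perf's defaults:

  # 1-second software DIF-generate run; binary path and -t/-w flags copied from the trace
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate
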
11:16:53 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:25.262 11:16:53 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:25.262 ************************************ 00:08:25.262 END TEST accel_dif_generate 00:08:25.262 ************************************ 00:08:25.262 11:16:53 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:25.262 11:16:53 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:08:25.262 11:16:53 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:25.262 11:16:53 accel -- common/autotest_common.sh@10 -- # set +x 00:08:25.262 ************************************ 00:08:25.262 START TEST accel_dif_generate_copy 00:08:25.262 ************************************ 00:08:25.262 11:16:53 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:08:25.262 11:16:53 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:25.262 11:16:53 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:25.262 11:16:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:25.262 11:16:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:25.262 11:16:53 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:25.262 11:16:53 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:25.262 11:16:53 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:25.262 11:16:53 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:25.262 11:16:53 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:25.262 11:16:53 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:25.262 11:16:53 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:25.262 11:16:53 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:25.262 11:16:53 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:25.262 11:16:53 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:25.262 [2024-06-10 11:16:54.020109] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
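
The suite now relaunches the same example binary with -w dif_generate_copy, the variant that produces protection information while copying the payload into a second buffer rather than generating it in place; everything else (1-second run, single core, software module) matches the previous test. A hand-run sketch, again omitting the JSON config the wrapper pipes in on fd 62:

  # DIF generate-and-copy workload, 1 second, software path (flags from the trace)
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy
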
00:08:25.262 [2024-06-10 11:16:54.020176] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3427590 ] 00:08:25.262 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.262 [2024-06-10 11:16:54.081215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.262 [2024-06-10 11:16:54.146581] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:25.262 11:16:54 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:25.262 11:16:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:26.650 00:08:26.650 real 0m1.284s 00:08:26.650 user 0m1.193s 00:08:26.650 sys 0m0.102s 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:26.650 11:16:55 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:26.650 ************************************ 00:08:26.650 END TEST accel_dif_generate_copy 00:08:26.650 ************************************ 00:08:26.650 11:16:55 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:26.650 11:16:55 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:26.650 11:16:55 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:08:26.650 11:16:55 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:26.650 11:16:55 accel -- common/autotest_common.sh@10 -- # set +x 00:08:26.650 ************************************ 00:08:26.650 START TEST accel_comp 00:08:26.650 ************************************ 00:08:26.650 11:16:55 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@17 -- # local 
accel_module 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:26.650 [2024-06-10 11:16:55.381628] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:08:26.650 [2024-06-10 11:16:55.381737] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3427937 ] 00:08:26.650 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.650 [2024-06-10 11:16:55.451216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.650 [2024-06-10 11:16:55.518105] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.650 11:16:55 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.650 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.651 11:16:55 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.651 11:16:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:28.036 11:16:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:28.036 11:16:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.036 11:16:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:28.036 11:16:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:28.036 11:16:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:28.036 11:16:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.036 11:16:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:28.036 11:16:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:28.036 11:16:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:28.036 11:16:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.037 11:16:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:28.037 11:16:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:28.037 11:16:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:28.037 11:16:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.037 11:16:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:28.037 11:16:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:28.037 11:16:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:28.037 11:16:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.037 11:16:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:28.037 11:16:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:28.037 11:16:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:28.037 11:16:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.037 11:16:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:28.037 11:16:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:28.037 11:16:56 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:28.037 11:16:56 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:28.037 11:16:56 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:28.037 00:08:28.037 real 0m1.299s 00:08:28.037 user 0m1.193s 00:08:28.037 sys 0m0.117s 00:08:28.037 11:16:56 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:28.037 11:16:56 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:28.037 ************************************ 00:08:28.037 END TEST accel_comp 00:08:28.037 ************************************ 00:08:28.037 11:16:56 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:28.037 11:16:56 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:08:28.037 11:16:56 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:28.037 11:16:56 accel -- common/autotest_common.sh@10 -- # set +x 00:08:28.037 ************************************ 00:08:28.037 START TEST accel_decomp 00:08:28.037 ************************************ 00:08:28.037 11:16:56 
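
That closes accel_comp: a 1-second compress run against the repository's test/accel/bib sample file, again on the software module, at about 1.30 s of wall time. The harness immediately queues the inverse operation, decompress, below. The compress leg in isolation, sketched without the fd-62 config:

  # 1-second software compress of the bib test file (-l names the input, per the trace)
  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib
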
accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:28.037 [2024-06-10 11:16:56.755700] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:08:28.037 [2024-06-10 11:16:56.755802] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3428292 ] 00:08:28.037 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.037 [2024-06-10 11:16:56.816826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.037 [2024-06-10 11:16:56.881718] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.037 11:16:56 accel.accel_decomp 
-- accel/accel.sh@21 -- # case "$var" in 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.037 11:16:56 accel.accel_decomp -- 
accel/accel.sh@19 -- # read -r var val 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.037 11:16:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:29.424 11:16:58 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:29.424 00:08:29.424 real 0m1.288s 00:08:29.424 user 0m1.200s 00:08:29.424 sys 0m0.100s 00:08:29.424 11:16:58 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:29.424 11:16:58 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:29.424 ************************************ 00:08:29.424 END TEST accel_decomp 00:08:29.424 ************************************ 00:08:29.424 11:16:58 accel -- 
accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:29.424 11:16:58 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:08:29.424 11:16:58 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:29.424 11:16:58 accel -- common/autotest_common.sh@10 -- # set +x 00:08:29.424 ************************************ 00:08:29.424 START TEST accel_decomp_full 00:08:29.424 ************************************ 00:08:29.424 11:16:58 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:29.424 11:16:58 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:29.424 11:16:58 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:29.424 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.424 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.424 11:16:58 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:29.424 11:16:58 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:29.424 11:16:58 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:29.424 11:16:58 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:29.424 11:16:58 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:29.424 11:16:58 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:29.424 11:16:58 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:29.424 11:16:58 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:29.424 11:16:58 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:29.424 11:16:58 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:29.424 [2024-06-10 11:16:58.117847] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
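
accel_decomp above decompressed the same bib payload with -y enabled (the harness's verify switch, judging by the Yes val in its trace) in about 1.29 s; this accel_decomp_full run repeats it with -o 0 added, and the '111250 bytes' val traced below suggests accel_perf then sizes the buffer to the whole uncompressed payload rather than the 4096-byte default the earlier runs used. Sketched as a standalone command, config again omitted:

  # full-buffer decompress with verification; flags lifted from the harness invocation
  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0
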
00:08:29.424 [2024-06-10 11:16:58.117929] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3428541 ] 00:08:29.424 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.424 [2024-06-10 11:16:58.182366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.425 [2024-06-10 11:16:58.254787] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.425 11:16:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # 
read -r var val 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:30.809 11:16:59 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:30.809 00:08:30.809 real 0m1.312s 00:08:30.809 user 0m1.221s 00:08:30.809 sys 0m0.103s 00:08:30.809 11:16:59 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:30.809 11:16:59 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:30.809 ************************************ 00:08:30.809 END TEST accel_decomp_full 00:08:30.810 ************************************ 00:08:30.810 11:16:59 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:30.810 11:16:59 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:08:30.810 11:16:59 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:30.810 11:16:59 accel -- common/autotest_common.sh@10 -- # set +x 00:08:30.810 ************************************ 00:08:30.810 START TEST accel_decomp_mcore 00:08:30.810 ************************************ 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@15 
-- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:30.810 [2024-06-10 11:16:59.503998] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:08:30.810 [2024-06-10 11:16:59.504095] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3428732 ] 00:08:30.810 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.810 [2024-06-10 11:16:59.567194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:30.810 [2024-06-10 11:16:59.635324] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.810 [2024-06-10 11:16:59.635437] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.810 [2024-06-10 11:16:59.635591] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.810 [2024-06-10 11:16:59.635591] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 
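
accel_decomp_mcore is the first multi-core case in the sequence: -m 0xf widens the reactor mask to four cores, and the EAL notices above ("Total cores available: 4", reactors started on cores 0 through 3) confirm all four came up. In the timing summary further below, user time lands near four times real time (0m4.440s against 0m1.300s), which is what four reactors decompressing in parallel should produce. The equivalent hand-run, sketched without the piped-in config:

  # 4-core decompress with verification; -m 0xf is the core mask passed straight to accel_perf
  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -m 0xf
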
00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.810 11:16:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.197 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.197 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.197 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.197 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.197 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.197 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.197 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.197 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.197 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.197 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.197 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.197 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.197 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.197 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.198 11:17:00 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:32.198 00:08:32.198 real 0m1.300s 00:08:32.198 user 0m4.440s 00:08:32.198 sys 0m0.104s 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:32.198 11:17:00 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:32.198 ************************************ 00:08:32.198 END TEST accel_decomp_mcore 00:08:32.198 ************************************ 00:08:32.198 11:17:00 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:32.198 11:17:00 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:08:32.198 11:17:00 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:32.198 11:17:00 accel -- common/autotest_common.sh@10 -- # set +x 00:08:32.198 ************************************ 00:08:32.198 START TEST accel_decomp_full_mcore 00:08:32.198 ************************************ 00:08:32.198 11:17:00 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:32.198 11:17:00 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:32.198 11:17:00 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:32.198 11:17:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.198 11:17:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.198 11:17:00 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:32.198 11:17:00 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:32.198 11:17:00 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:32.198 11:17:00 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:32.198 11:17:00 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:32.198 11:17:00 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:32.198 11:17:00 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:32.198 11:17:00 
accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:32.198 11:17:00 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:32.198 11:17:00 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:32.198 [2024-06-10 11:17:00.878467] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:08:32.198 [2024-06-10 11:17:00.878562] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3429034 ] 00:08:32.198 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.198 [2024-06-10 11:17:00.939934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.198 [2024-06-10 11:17:01.007535] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.198 [2024-06-10 11:17:01.007648] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.198 [2024-06-10 11:17:01.007818] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.198 [2024-06-10 11:17:01.007819] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.198 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.199 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.199 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.199 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.199 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.199 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.199 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.199 11:17:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.583 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.583 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.583 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.583 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.583 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.583 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.583 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.583 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:33.584 00:08:33.584 real 0m1.313s 00:08:33.584 user 0m4.498s 00:08:33.584 sys 0m0.105s 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:33.584 11:17:02 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:33.584 ************************************ 00:08:33.584 END TEST accel_decomp_full_mcore 00:08:33.584 ************************************ 00:08:33.584 11:17:02 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:33.584 11:17:02 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:08:33.584 11:17:02 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:33.584 11:17:02 accel -- common/autotest_common.sh@10 -- # set +x 00:08:33.584 ************************************ 00:08:33.584 START TEST accel_decomp_mthread 00:08:33.584 ************************************ 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
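(Editor's note, a minimal sketch.) The decompress cases above all drive the same accel_perf example binary; only the flags change: -m 0xf spreads the work across a four-core reactor mask (the EAL line shows -c 0xf and four reactors starting), -o 0 switches the logged transfer size from '4096 bytes' to the full '111250 bytes' test vector, and the mthread cases that follow pass -T 2 on a single core (-c 0x1 in their EAL line). A hedged shell sketch for re-running these by hand, using the paths taken from the log and omitting the -c /dev/fd/62 accel JSON config the harness supplies (assumed optional for a plain software-module run):

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
PERF=$SPDK/build/examples/accel_perf
BIB=$SPDK/test/accel/bib            # decompress input used by the harness (path from the log)

# multi-core: 4 KiB chunks, then full-buffer (-o 0) decompress on cores 0-3
$PERF -t 1 -w decompress -l $BIB -y -m 0xf
$PERF -t 1 -w decompress -l $BIB -y -o 0 -m 0xf

# single core, -T 2 (the "mthread" cases), with and without full buffers
$PERF -t 1 -w decompress -l $BIB -y -T 2
$PERF -t 1 -w decompress -l $BIB -y -o 0 -T 2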
00:08:33.584 [2024-06-10 11:17:02.266450] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:08:33.584 [2024-06-10 11:17:02.266541] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3429386 ] 00:08:33.584 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.584 [2024-06-10 11:17:02.327823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.584 [2024-06-10 11:17:02.393076] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.584 11:17:02 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.584 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.585 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:33.585 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.585 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.585 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.585 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:33.585 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.585 11:17:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.585 11:17:02 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:08:34.970 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.970 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.970 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.970 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.970 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.970 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.970 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:34.971 00:08:34.971 real 0m1.292s 00:08:34.971 user 0m1.197s 00:08:34.971 sys 0m0.108s 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:34.971 11:17:03 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:34.971 ************************************ 00:08:34.971 END TEST accel_decomp_mthread 00:08:34.971 ************************************ 00:08:34.971 11:17:03 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:34.971 11:17:03 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:08:34.971 11:17:03 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:34.971 11:17:03 accel 
-- common/autotest_common.sh@10 -- # set +x 00:08:34.971 ************************************ 00:08:34.971 START TEST accel_decomp_full_mthread 00:08:34.971 ************************************ 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:34.971 [2024-06-10 11:17:03.630165] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:08:34.971 [2024-06-10 11:17:03.630254] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3429739 ] 00:08:34.971 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.971 [2024-06-10 11:17:03.692328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.971 [2024-06-10 11:17:03.756690] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.971 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:08:34.972 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.972 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.972 11:17:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:36.359 00:08:36.359 real 0m1.316s 00:08:36.359 user 0m1.224s 00:08:36.359 sys 0m0.104s 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:36.359 11:17:04 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:36.359 ************************************ 00:08:36.359 END TEST accel_decomp_full_mthread 00:08:36.359 
************************************ 00:08:36.359 11:17:04 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:36.359 11:17:04 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:36.359 11:17:04 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:08:36.359 11:17:04 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:36.359 11:17:04 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:36.359 11:17:04 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:36.359 11:17:04 accel -- common/autotest_common.sh@10 -- # set +x 00:08:36.359 11:17:04 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:36.359 11:17:04 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:36.359 11:17:04 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:36.359 11:17:04 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:36.359 11:17:04 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:36.359 11:17:04 accel -- accel/accel.sh@41 -- # jq -r . 00:08:36.359 ************************************ 00:08:36.359 START TEST accel_dif_functional_tests 00:08:36.359 ************************************ 00:08:36.359 11:17:04 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:36.359 [2024-06-10 11:17:05.051370] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:08:36.359 [2024-06-10 11:17:05.051439] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3430029 ] 00:08:36.359 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.359 [2024-06-10 11:17:05.116804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:36.359 [2024-06-10 11:17:05.189484] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.359 [2024-06-10 11:17:05.189597] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.359 [2024-06-10 11:17:05.189600] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.359 00:08:36.359 00:08:36.359 CUnit - A unit testing framework for C - Version 2.1-3 00:08:36.359 http://cunit.sourceforge.net/ 00:08:36.359 00:08:36.360 00:08:36.360 Suite: accel_dif 00:08:36.360 Test: verify: DIF generated, GUARD check ...passed 00:08:36.360 Test: verify: DIF generated, APPTAG check ...passed 00:08:36.360 Test: verify: DIF generated, REFTAG check ...passed 00:08:36.360 Test: verify: DIF not generated, GUARD check ...[2024-06-10 11:17:05.244849] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:36.360 passed 00:08:36.360 Test: verify: DIF not generated, APPTAG check ...[2024-06-10 11:17:05.244893] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:36.360 passed 00:08:36.360 Test: verify: DIF not generated, REFTAG check ...[2024-06-10 11:17:05.244916] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:36.360 passed 00:08:36.360 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:36.360 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-10 11:17:05.244964] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:36.360 passed 00:08:36.360 Test: verify: 
APPTAG incorrect, no APPTAG check ...passed 00:08:36.360 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:36.360 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:36.360 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-10 11:17:05.245079] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:36.360 passed 00:08:36.360 Test: verify copy: DIF generated, GUARD check ...passed 00:08:36.360 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:36.360 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:36.360 Test: verify copy: DIF not generated, GUARD check ...[2024-06-10 11:17:05.245200] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:36.360 passed 00:08:36.360 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-10 11:17:05.245223] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:36.360 passed 00:08:36.360 Test: verify copy: DIF not generated, REFTAG check ...[2024-06-10 11:17:05.245245] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:36.360 passed 00:08:36.360 Test: generate copy: DIF generated, GUARD check ...passed 00:08:36.360 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:36.360 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:36.360 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:36.360 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:36.360 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:36.360 Test: generate copy: iovecs-len validate ...[2024-06-10 11:17:05.245432] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:36.360 passed 00:08:36.360 Test: generate copy: buffer alignment validate ...passed 00:08:36.360 00:08:36.360 Run Summary: Type Total Ran Passed Failed Inactive 00:08:36.360 suites 1 1 n/a 0 0 00:08:36.360 tests 26 26 26 0 0 00:08:36.360 asserts 115 115 115 0 n/a 00:08:36.360 00:08:36.360 Elapsed time = 0.002 seconds 00:08:36.621 00:08:36.621 real 0m0.366s 00:08:36.621 user 0m0.445s 00:08:36.621 sys 0m0.140s 00:08:36.621 11:17:05 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:36.621 11:17:05 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:36.621 ************************************ 00:08:36.621 END TEST accel_dif_functional_tests 00:08:36.621 ************************************ 00:08:36.621 00:08:36.621 real 0m30.114s 00:08:36.621 user 0m33.727s 00:08:36.621 sys 0m4.144s 00:08:36.621 11:17:05 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:36.621 11:17:05 accel -- common/autotest_common.sh@10 -- # set +x 00:08:36.621 ************************************ 00:08:36.621 END TEST accel 00:08:36.621 ************************************ 00:08:36.621 11:17:05 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:36.621 11:17:05 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:36.621 11:17:05 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:36.621 11:17:05 -- common/autotest_common.sh@10 -- # set +x 00:08:36.621 ************************************ 00:08:36.621 START TEST accel_rpc 00:08:36.621 ************************************ 00:08:36.621 11:17:05 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:36.621 * Looking for test storage... 00:08:36.621 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:08:36.621 11:17:05 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:36.621 11:17:05 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3430154 00:08:36.621 11:17:05 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3430154 00:08:36.621 11:17:05 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 3430154 ']' 00:08:36.621 11:17:05 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.621 11:17:05 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:36.621 11:17:05 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.621 11:17:05 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:36.621 11:17:05 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.621 11:17:05 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:36.883 [2024-06-10 11:17:05.620986] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
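(Editor's note, a minimal sketch.) The accel_dif suite above is a CUnit run from the standalone dif test app rather than accel_perf; the *ERROR* lines appear to be expected negative-path output (each "DIF not generated" case checks that verify reports the guard, app tag, or ref tag mismatch), and the summary shows all 26 tests passing. A sketch of re-running just this suite outside the harness, assuming an empty accel JSON config is acceptable on the fd the harness normally populates:

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
# the harness passes -c /dev/fd/62 with a generated accel config; feed a bare one here (assumption)
"$SPDK/test/accel/dif/dif" -c /dev/fd/62 62<<'JSON'
{}
JSON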
00:08:36.883 [2024-06-10 11:17:05.621044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3430154 ] 00:08:36.883 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.883 [2024-06-10 11:17:05.681722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.883 [2024-06-10 11:17:05.748468] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.458 11:17:06 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:37.458 11:17:06 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:08:37.458 11:17:06 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:37.458 11:17:06 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:37.458 11:17:06 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:37.458 11:17:06 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:37.458 11:17:06 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:37.458 11:17:06 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:37.458 11:17:06 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:37.458 11:17:06 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.458 ************************************ 00:08:37.458 START TEST accel_assign_opcode 00:08:37.458 ************************************ 00:08:37.458 11:17:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:08:37.458 11:17:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:37.458 11:17:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:37.458 11:17:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:37.458 [2024-06-10 11:17:06.398349] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:37.458 11:17:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:37.458 11:17:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:37.458 11:17:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:37.458 11:17:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:37.458 [2024-06-10 11:17:06.406362] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:37.458 11:17:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:37.458 11:17:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:37.458 11:17:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:37.458 11:17:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:37.720 11:17:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:37.720 11:17:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:37.720 11:17:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:37.720 11:17:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:37.720 11:17:06 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:37.720 11:17:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:37.720 11:17:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:37.720 software 00:08:37.720 00:08:37.720 real 0m0.208s 00:08:37.720 user 0m0.045s 00:08:37.720 sys 0m0.012s 00:08:37.720 11:17:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:37.720 11:17:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:37.720 ************************************ 00:08:37.720 END TEST accel_assign_opcode 00:08:37.720 ************************************ 00:08:37.720 11:17:06 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3430154 00:08:37.720 11:17:06 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 3430154 ']' 00:08:37.720 11:17:06 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 3430154 00:08:37.720 11:17:06 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:08:37.720 11:17:06 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:37.720 11:17:06 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3430154 00:08:37.980 11:17:06 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:37.981 11:17:06 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:37.981 11:17:06 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3430154' 00:08:37.981 killing process with pid 3430154 00:08:37.981 11:17:06 accel_rpc -- common/autotest_common.sh@968 -- # kill 3430154 00:08:37.981 11:17:06 accel_rpc -- common/autotest_common.sh@973 -- # wait 3430154 00:08:37.981 00:08:37.981 real 0m1.427s 00:08:37.981 user 0m1.494s 00:08:37.981 sys 0m0.385s 00:08:37.981 11:17:06 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:37.981 11:17:06 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.981 ************************************ 00:08:37.981 END TEST accel_rpc 00:08:37.981 ************************************ 00:08:37.981 11:17:06 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:37.981 11:17:06 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:37.981 11:17:06 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:37.981 11:17:06 -- common/autotest_common.sh@10 -- # set +x 00:08:38.241 ************************************ 00:08:38.241 START TEST app_cmdline 00:08:38.241 ************************************ 00:08:38.241 11:17:06 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:38.241 * Looking for test storage... 
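(Editor's note, a minimal sketch.) The accel_rpc test above exercises opcode reassignment over JSON-RPC against an spdk_tgt started with --wait-for-rpc: before init the target accepts an assignment even to a bogus "incorrect" module (logged as a NOTICE), then the real "software" assignment is made, the framework is started, and the resulting assignment is read back. A hedged shell sketch of that sequence using the same rpc.py methods the log shows, assuming a target is already running with --wait-for-rpc:

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py
# target side (already running in the test): $SPDK/build/bin/spdk_tgt --wait-for-rpc

$RPC accel_assign_opc -o copy -m incorrect   # accepted pre-init, logged as a NOTICE
$RPC accel_assign_opc -o copy -m software    # the assignment the test actually verifies
$RPC framework_start_init                    # finish subsystem init so the query below reflects it
$RPC accel_get_opc_assignments | jq -r .copy | grep software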
00:08:38.241 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:38.242 11:17:07 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:38.242 11:17:07 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3430564 00:08:38.242 11:17:07 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3430564 00:08:38.242 11:17:07 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:38.242 11:17:07 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 3430564 ']' 00:08:38.242 11:17:07 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.242 11:17:07 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:38.242 11:17:07 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.242 11:17:07 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:38.242 11:17:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:38.242 [2024-06-10 11:17:07.115254] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:08:38.242 [2024-06-10 11:17:07.115310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3430564 ] 00:08:38.242 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.242 [2024-06-10 11:17:07.178248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.503 [2024-06-10 11:17:07.246435] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.075 11:17:07 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:39.075 11:17:07 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:08:39.075 11:17:07 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:39.075 { 00:08:39.075 "version": "SPDK v24.09-pre git sha1 ee2eae53a", 00:08:39.075 "fields": { 00:08:39.075 "major": 24, 00:08:39.075 "minor": 9, 00:08:39.075 "patch": 0, 00:08:39.075 "suffix": "-pre", 00:08:39.075 "commit": "ee2eae53a" 00:08:39.075 } 00:08:39.075 } 00:08:39.337 11:17:08 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:39.337 11:17:08 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:39.337 11:17:08 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:39.337 11:17:08 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:39.337 11:17:08 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:39.337 11:17:08 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:39.337 11:17:08 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:39.337 11:17:08 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:39.337 11:17:08 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:39.337 11:17:08 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:39.337 request: 00:08:39.337 { 00:08:39.337 "method": "env_dpdk_get_mem_stats", 00:08:39.337 "req_id": 1 00:08:39.337 } 00:08:39.337 Got JSON-RPC error response 00:08:39.337 response: 00:08:39.337 { 00:08:39.337 "code": -32601, 00:08:39.337 "message": "Method not found" 00:08:39.337 } 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:39.337 11:17:08 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3430564 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 3430564 ']' 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 3430564 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3430564 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3430564' 00:08:39.337 killing process with pid 3430564 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@968 -- # kill 3430564 00:08:39.337 11:17:08 app_cmdline -- common/autotest_common.sh@973 -- # wait 3430564 00:08:39.598 00:08:39.598 real 0m1.504s 00:08:39.598 user 0m1.798s 00:08:39.598 sys 0m0.380s 00:08:39.598 11:17:08 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:39.598 11:17:08 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:08:39.598 ************************************ 00:08:39.599 END TEST app_cmdline 00:08:39.599 ************************************ 00:08:39.599 11:17:08 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:39.599 11:17:08 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:39.599 11:17:08 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:39.599 11:17:08 -- common/autotest_common.sh@10 -- # set +x 00:08:39.599 ************************************ 00:08:39.599 START TEST version 00:08:39.599 ************************************ 00:08:39.599 11:17:08 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:39.864 * Looking for test storage... 00:08:39.864 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:39.864 11:17:08 version -- app/version.sh@17 -- # get_header_version major 00:08:39.864 11:17:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:39.864 11:17:08 version -- app/version.sh@14 -- # cut -f2 00:08:39.864 11:17:08 version -- app/version.sh@14 -- # tr -d '"' 00:08:39.864 11:17:08 version -- app/version.sh@17 -- # major=24 00:08:39.864 11:17:08 version -- app/version.sh@18 -- # get_header_version minor 00:08:39.864 11:17:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:39.864 11:17:08 version -- app/version.sh@14 -- # cut -f2 00:08:39.864 11:17:08 version -- app/version.sh@14 -- # tr -d '"' 00:08:39.864 11:17:08 version -- app/version.sh@18 -- # minor=9 00:08:39.864 11:17:08 version -- app/version.sh@19 -- # get_header_version patch 00:08:39.864 11:17:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:39.864 11:17:08 version -- app/version.sh@14 -- # cut -f2 00:08:39.864 11:17:08 version -- app/version.sh@14 -- # tr -d '"' 00:08:39.864 11:17:08 version -- app/version.sh@19 -- # patch=0 00:08:39.864 11:17:08 version -- app/version.sh@20 -- # get_header_version suffix 00:08:39.864 11:17:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:39.864 11:17:08 version -- app/version.sh@14 -- # cut -f2 00:08:39.864 11:17:08 version -- app/version.sh@14 -- # tr -d '"' 00:08:39.864 11:17:08 version -- app/version.sh@20 -- # suffix=-pre 00:08:39.864 11:17:08 version -- app/version.sh@22 -- # version=24.9 00:08:39.864 11:17:08 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:39.864 11:17:08 version -- app/version.sh@28 -- # version=24.9rc0 00:08:39.864 11:17:08 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:39.864 11:17:08 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:39.864 11:17:08 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:39.864 11:17:08 version -- app/version.sh@31 -- # [[ 24.9rc0 == 
\2\4\.\9\r\c\0 ]] 00:08:39.864 00:08:39.864 real 0m0.169s 00:08:39.864 user 0m0.084s 00:08:39.864 sys 0m0.125s 00:08:39.864 11:17:08 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:39.864 11:17:08 version -- common/autotest_common.sh@10 -- # set +x 00:08:39.864 ************************************ 00:08:39.864 END TEST version 00:08:39.864 ************************************ 00:08:39.864 11:17:08 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:39.864 11:17:08 -- spdk/autotest.sh@198 -- # uname -s 00:08:39.864 11:17:08 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:39.864 11:17:08 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:39.864 11:17:08 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:39.864 11:17:08 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:39.864 11:17:08 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:39.864 11:17:08 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:39.864 11:17:08 -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:39.864 11:17:08 -- common/autotest_common.sh@10 -- # set +x 00:08:39.864 11:17:08 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:39.864 11:17:08 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:39.864 11:17:08 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:39.864 11:17:08 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:39.864 11:17:08 -- spdk/autotest.sh@283 -- # '[' rdma = rdma ']' 00:08:39.864 11:17:08 -- spdk/autotest.sh@284 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:39.864 11:17:08 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:39.864 11:17:08 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:39.864 11:17:08 -- common/autotest_common.sh@10 -- # set +x 00:08:40.188 ************************************ 00:08:40.188 START TEST nvmf_rdma 00:08:40.188 ************************************ 00:08:40.188 11:17:08 nvmf_rdma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:40.188 * Looking for test storage... 00:08:40.188 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:40.188 11:17:08 nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.188 11:17:08 nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.188 11:17:08 nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.188 11:17:08 nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.188 11:17:08 nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.188 11:17:08 nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.188 11:17:08 nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:08:40.188 11:17:08 nvmf_rdma -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:40.188 11:17:08 nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:40.188 11:17:08 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:40.188 11:17:08 nvmf_rdma -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:08:40.188 11:17:08 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:40.188 11:17:08 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:40.188 11:17:08 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:40.188 ************************************ 00:08:40.188 START TEST nvmf_example 00:08:40.188 ************************************ 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:08:40.188 * Looking for test storage... 
00:08:40.188 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.188 11:17:09 nvmf_rdma.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:40.189 11:17:09 
nvmf_rdma.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.189 11:17:09 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.451 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:40.451 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:40.451 11:17:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:08:40.451 11:17:09 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:47.043 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:47.043 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:47.043 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:47.043 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:47.043 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:47.043 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:47.043 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:47.043 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:47.043 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:47.043 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:47.043 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:47.043 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:47.043 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:47.043 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:47.043 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:47.043 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.043 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
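A note on the device discovery being traced here: nvmf/common.sh keys its candidate-NIC arrays on PCI vendor:device IDs (Mellanox 0x15b3:0x1015 in this run) and, because SPDK_TEST_NVMF_NICS=mlx5, keeps only the mlx entries before resolving each PCI function to its net device through sysfs. A minimal standalone sketch of that idea, assuming only lspci and sysfs are available (illustrative, not the exact common.sh code):

    #!/usr/bin/env bash
    # Enumerate Mellanox (vendor 0x15b3) PCI functions and the netdevs bound to them,
    # roughly what gather_supported_nvmf_pci_devs does with its pci_bus_cache arrays.
    for pci in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] || continue            # skip functions with no bound net device
        echo "Found net devices under $pci: $(basename "$netdir")"
      done
    done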
00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:08:47.044 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:08:47.044 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.044 11:17:15 
nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:08:47.044 Found net devices under 0000:98:00.0: mlx_0_0 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:08:47.044 Found net devices under 0000:98:00.1: mlx_0_1 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@420 -- # rdma_device_init 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # uname 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:47.044 11:17:15 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:47.044 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- 
nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:47.305 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:47.305 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:08:47.305 altname enp152s0f0np0 00:08:47.305 altname ens817f0np0 00:08:47.305 inet 192.168.100.8/24 scope global mlx_0_0 00:08:47.305 valid_lft forever preferred_lft forever 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:47.305 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:47.305 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:08:47.305 altname enp152s0f1np1 00:08:47.305 altname ens817f1np1 00:08:47.305 inet 192.168.100.9/24 scope global mlx_0_1 00:08:47.305 valid_lft forever preferred_lft forever 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:47.305 11:17:16 
nvmf_rdma.nvmf_example -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:47.305 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:47.306 192.168.100.9' 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:47.306 192.168.100.9' 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # head -n 1 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:47.306 
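The get_ip_address calls traced above reduce to a one-line ip(8) query per interface; a standalone restatement of how NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP are derived in this run (interface names and addresses taken from the log):

    get_ip_address() {
      local ifc=$1
      # "ip -o -4 addr show IFACE" prints "N: IFACE inet A.B.C.D/PREFIX ..."; field 4 is the CIDR address
      ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8 here
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9 here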
11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:47.306 192.168.100.9' 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # tail -n +2 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # head -n 1 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3434712 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3434712 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@830 -- # '[' -z 3434712 ']' 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
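The waitforlisten step logged here simply polls the target's RPC socket until it answers. A rough equivalent, assuming the default /var/tmp/spdk.sock socket and the rpc.py script used elsewhere in this run (the real helper in autotest_common.sh is more involved):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # Retry until the freshly started target services an RPC, or give up after ~100 tries.
    for i in $(seq 1 100); do
      if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        echo "target is listening on /var/tmp/spdk.sock"
        break
      fi
      sleep 0.5
    done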
00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:47.306 11:17:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:47.306 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.247 11:17:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:48.247 11:17:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@863 -- # return 0 00:08:48.247 11:17:16 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:48.247 11:17:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:48.247 11:17:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.247 11:17:17 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:48.247 11:17:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:48.247 11:17:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.507 11:17:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:48.507 11:17:17 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:48.507 11:17:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:48.508 11:17:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.508 11:17:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:48.508 11:17:17 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:48.508 11:17:17 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:48.508 11:17:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:48.508 11:17:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.508 11:17:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:48.508 11:17:17 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:48.508 11:17:17 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:48.508 11:17:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:48.508 11:17:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.508 11:17:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:48.508 11:17:17 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:48.508 11:17:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:48.508 11:17:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.508 11:17:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:48.508 11:17:17 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:48.508 11:17:17 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
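For readability, this is the spdk_nvme_perf invocation from the line above, reflowed: 64 outstanding I/Os per queue, 4 KiB I/O size, a random read/write workload with a 30% read mix, a 10 second run, pointed at the RDMA listener created earlier:

    # spdk_nvme_perf flags: -q queue depth, -o I/O size in bytes, -w workload type,
    # -M read percentage for mixed workloads, -t run time in seconds, -r target transport ID
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'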
00:08:48.508 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.747 Initializing NVMe Controllers 00:09:00.747 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:09:00.747 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:00.747 Initialization complete. Launching workers. 00:09:00.747 ======================================================== 00:09:00.747 Latency(us) 00:09:00.747 Device Information : IOPS MiB/s Average min max 00:09:00.747 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 25725.03 100.49 2487.45 674.59 15029.97 00:09:00.747 ======================================================== 00:09:00.747 Total : 25725.03 100.49 2487.45 674.59 15029.97 00:09:00.747 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@117 -- # sync 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:00.747 rmmod nvme_rdma 00:09:00.747 rmmod nvme_fabrics 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3434712 ']' 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3434712 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@949 -- # '[' -z 3434712 ']' 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@953 -- # kill -0 3434712 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@954 -- # uname 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3434712 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@955 -- # process_name=nvmf 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@959 -- # '[' nvmf = sudo ']' 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3434712' 00:09:00.747 killing process with pid 3434712 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@968 -- # kill 3434712 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@973 -- # wait 3434712 00:09:00.747 nvmf threads initialize successfully 00:09:00.747 bdev subsystem init successfully 00:09:00.747 created a nvmf target service 00:09:00.747 create targets's poll groups done 00:09:00.747 all subsystems of target started 00:09:00.747 nvmf target is running 00:09:00.747 all subsystems of target stopped 00:09:00.747 destroy targets's poll groups 
done 00:09:00.747 destroyed the nvmf target service 00:09:00.747 bdev subsystem finish successfully 00:09:00.747 nvmf threads destroy successfully 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:00.747 00:09:00.747 real 0m19.891s 00:09:00.747 user 0m52.302s 00:09:00.747 sys 0m5.533s 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:00.747 11:17:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:00.747 ************************************ 00:09:00.747 END TEST nvmf_example 00:09:00.747 ************************************ 00:09:00.747 11:17:28 nvmf_rdma -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:09:00.747 11:17:28 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:00.747 11:17:28 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:00.747 11:17:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:00.747 ************************************ 00:09:00.747 START TEST nvmf_filesystem 00:09:00.747 ************************************ 00:09:00.747 11:17:28 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:09:00.747 * Looking for test storage... 
00:09:00.747 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:00.747 11:17:29 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:09:00.747 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:00.747 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:00.747 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:00.747 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:00.747 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:00.747 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:09:00.747 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:00.747 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:09:00.747 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:00.747 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:00.747 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:00.747 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:00.747 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:00.747 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:00.747 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:00.747 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:00.747 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:00.748 11:17:29 
nvmf_rdma.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@57 -- 
# CONFIG_HAVE_LIBBSD=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:09:00.748 11:17:29 
nvmf_rdma.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:09:00.748 11:17:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:00.748 #define SPDK_CONFIG_H 00:09:00.748 #define SPDK_CONFIG_APPS 1 00:09:00.748 #define SPDK_CONFIG_ARCH native 00:09:00.748 #undef SPDK_CONFIG_ASAN 00:09:00.748 #undef SPDK_CONFIG_AVAHI 00:09:00.748 #undef SPDK_CONFIG_CET 00:09:00.748 #define SPDK_CONFIG_COVERAGE 1 00:09:00.748 #define SPDK_CONFIG_CROSS_PREFIX 00:09:00.748 #undef SPDK_CONFIG_CRYPTO 00:09:00.748 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:00.748 #undef SPDK_CONFIG_CUSTOMOCF 00:09:00.748 #undef SPDK_CONFIG_DAOS 00:09:00.748 #define SPDK_CONFIG_DAOS_DIR 00:09:00.748 #define SPDK_CONFIG_DEBUG 1 00:09:00.748 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:00.748 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:09:00.748 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:00.748 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:00.748 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:00.748 #undef SPDK_CONFIG_DPDK_UADK 00:09:00.748 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:09:00.748 #define SPDK_CONFIG_EXAMPLES 1 00:09:00.748 #undef SPDK_CONFIG_FC 00:09:00.748 #define SPDK_CONFIG_FC_PATH 00:09:00.748 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:00.748 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:00.748 #undef SPDK_CONFIG_FUSE 00:09:00.748 #undef SPDK_CONFIG_FUZZER 00:09:00.748 #define SPDK_CONFIG_FUZZER_LIB 00:09:00.748 #undef SPDK_CONFIG_GOLANG 00:09:00.748 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:00.748 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:00.748 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:00.748 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:00.748 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:00.748 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:00.748 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:00.748 #define SPDK_CONFIG_IDXD 1 00:09:00.748 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:00.748 #undef SPDK_CONFIG_IPSEC_MB 00:09:00.748 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:00.748 #define SPDK_CONFIG_ISAL 1 00:09:00.748 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:00.749 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:00.749 #define SPDK_CONFIG_LIBDIR 00:09:00.749 #undef SPDK_CONFIG_LTO 00:09:00.749 #define SPDK_CONFIG_MAX_LCORES 00:09:00.749 #define SPDK_CONFIG_NVME_CUSE 1 00:09:00.749 #undef SPDK_CONFIG_OCF 00:09:00.749 #define SPDK_CONFIG_OCF_PATH 
00:09:00.749 #define SPDK_CONFIG_OPENSSL_PATH 00:09:00.749 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:00.749 #define SPDK_CONFIG_PGO_DIR 00:09:00.749 #undef SPDK_CONFIG_PGO_USE 00:09:00.749 #define SPDK_CONFIG_PREFIX /usr/local 00:09:00.749 #undef SPDK_CONFIG_RAID5F 00:09:00.749 #undef SPDK_CONFIG_RBD 00:09:00.749 #define SPDK_CONFIG_RDMA 1 00:09:00.749 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:00.749 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:00.749 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:00.749 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:00.749 #define SPDK_CONFIG_SHARED 1 00:09:00.749 #undef SPDK_CONFIG_SMA 00:09:00.749 #define SPDK_CONFIG_TESTS 1 00:09:00.749 #undef SPDK_CONFIG_TSAN 00:09:00.749 #define SPDK_CONFIG_UBLK 1 00:09:00.749 #define SPDK_CONFIG_UBSAN 1 00:09:00.749 #undef SPDK_CONFIG_UNIT_TESTS 00:09:00.749 #undef SPDK_CONFIG_URING 00:09:00.749 #define SPDK_CONFIG_URING_PATH 00:09:00.749 #undef SPDK_CONFIG_URING_ZNS 00:09:00.749 #undef SPDK_CONFIG_USDT 00:09:00.749 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:00.749 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:00.749 #undef SPDK_CONFIG_VFIO_USER 00:09:00.749 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:00.749 #define SPDK_CONFIG_VHOST 1 00:09:00.749 #define SPDK_CONFIG_VIRTIO 1 00:09:00.749 #undef SPDK_CONFIG_VTUNE 00:09:00.749 #define SPDK_CONFIG_VTUNE_DIR 00:09:00.749 #define SPDK_CONFIG_WERROR 1 00:09:00.749 #define SPDK_CONFIG_WPDK_DIR 00:09:00.749 #undef SPDK_CONFIG_XNVME 00:09:00.749 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load 
collect-vmstat) 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:00.749 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:09:00.750 11:17:29 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@158 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:09:00.750 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=rdma 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3437469 ]] 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3437469 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.Rx7o24 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Rx7o24/tests/target /tmp/spdk.Rx7o24 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=959328256 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4325101568 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=122962968576 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129371025408 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6408056832 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64682135552 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685510656 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864495104 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874206720 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9711616 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=394240 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:09:00.751 
11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=109568 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64685006848 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685514752 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=507904 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937097216 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937101312 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:09:00.751 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:09:00.752 * Looking for test storage... 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=122962968576 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8622649344 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:00.752 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1681 -- # set -o errtrace 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1686 -- # true 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1688 -- # xtrace_fd 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:00.752 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:00.753 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.753 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:00.753 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:00.753 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:00.753 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.753 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.753 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.753 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:00.753 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:00.753 11:17:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:00.753 11:17:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:07.343 11:17:36 
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:09:07.343 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:07.343 11:17:36 
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:09:07.343 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:09:07.343 Found net devices under 0000:98:00.0: mlx_0_0 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:09:07.343 Found net devices under 0000:98:00.1: mlx_0_1 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@420 -- # rdma_device_init 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # uname 00:09:07.343 11:17:36 
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:07.343 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:07.343 
11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0
00:09:07.343 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:09:07.343 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff
00:09:07.343 altname enp152s0f0np0
00:09:07.343 altname ens817f0np0
00:09:07.343 inet 192.168.100.8/24 scope global mlx_0_0
00:09:07.344 valid_lft forever preferred_lft forever
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}'
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.9
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]]
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1
00:09:07.344 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:09:07.344 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff
00:09:07.344 altname enp152s0f1np1
00:09:07.344 altname ens817f1np1
00:09:07.344 inet 192.168.100.9/24 scope global mlx_0_1
00:09:07.344 valid_lft forever preferred_lft forever
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]]
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # get_available_rdma_ips
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # get_rdma_if_list
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:09:07.344 11:17:36
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:07.344 192.168.100.9' 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:07.344 192.168.100.9' 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # head -n 1 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:07.344 192.168.100.9' 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # tail -n +2 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # head -n 1 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:07.344 11:17:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:07.605 11:17:36 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:07.605 11:17:36 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:07.605 11:17:36 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:07.605 11:17:36 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:07.605 ************************************ 00:09:07.605 START TEST nvmf_filesystem_no_in_capsule 00:09:07.605 ************************************ 00:09:07.605 11:17:36 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 0 00:09:07.605 11:17:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:07.605 11:17:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:07.605 11:17:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:07.605 11:17:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:07.605 11:17:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:07.605 11:17:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3441144 00:09:07.605 11:17:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3441144 00:09:07.605 11:17:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:07.605 11:17:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 3441144 ']' 00:09:07.605 11:17:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.605 11:17:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:07.605 11:17:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.605 11:17:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:07.605 11:17:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:07.605 [2024-06-10 11:17:36.416220] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:09:07.605 [2024-06-10 11:17:36.416271] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.605 EAL: No free 2048 kB hugepages reported on node 1 00:09:07.605 [2024-06-10 11:17:36.479631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:07.605 [2024-06-10 11:17:36.553603] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.605 [2024-06-10 11:17:36.553640] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.605 [2024-06-10 11:17:36.553648] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.605 [2024-06-10 11:17:36.553654] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.605 [2024-06-10 11:17:36.553660] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
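
The app_setup_trace NOTICE lines just above describe how to pull a tracepoint snapshot out of the target this test started. A minimal sketch of that workflow, assuming the spdk_trace tool from the same SPDK build is on PATH (it is produced under build/bin of a compiled tree):

  # Snapshot the nvmf tracepoint group from shm instance 0, as the NOTICEs suggest
  spdk_trace -s nvmf -i 0
  # Or preserve the raw shared-memory trace file for offline decoding later
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.saved

The -i 0 matches the -i 0 the harness passed to nvmf_tgt, which is why the shared-memory trace file name ends in .0.
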
00:09:07.605 [2024-06-10 11:17:36.553822] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.605 [2024-06-10 11:17:36.553879] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.605 [2024-06-10 11:17:36.554066] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.605 [2024-06-10 11:17:36.554066] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.549 [2024-06-10 11:17:37.243371] rdma.c:2724:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:09:08.549 [2024-06-10 11:17:37.273786] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9900b0/0x9945a0) succeed. 00:09:08.549 [2024-06-10 11:17:37.288247] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9916f0/0x9d5c30) succeed. 
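
The target side is provisioned entirely over JSON-RPC; rpc_cmd in the trace is the harness's thin wrapper around scripts/rpc.py. The sequence traced just above and just below can be replayed by hand against a running nvmf_tgt, roughly as follows (a sketch: the ./spdk checkout path and the default /var/tmp/spdk.sock RPC socket are assumptions, every flag is taken from the trace itself):

  rpc=./spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0   # -c 0: no in-capsule data
  $rpc bdev_malloc_create 512 512 -b Malloc1       # 512 MiB ramdisk bdev with 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

After the listener is up, the host side attaches with the nvme connect invocation visible further down in the trace.
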
00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.549 Malloc1 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.549 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.810 [2024-06-10 11:17:37.526640] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:08.810 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.810 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:08.810 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:09:08.810 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:09:08.810 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:09:08.810 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:09:08.810 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:08.810 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.810 11:17:37 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:08.810 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:09:08.810 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[
00:09:08.810 {
00:09:08.810 "name": "Malloc1",
00:09:08.810 "aliases": [
00:09:08.810 "9aa592f6-37ec-4931-ada7-52868b3bf27a"
00:09:08.810 ],
00:09:08.810 "product_name": "Malloc disk",
00:09:08.810 "block_size": 512,
00:09:08.810 "num_blocks": 1048576,
00:09:08.810 "uuid": "9aa592f6-37ec-4931-ada7-52868b3bf27a",
00:09:08.810 "assigned_rate_limits": {
00:09:08.810 "rw_ios_per_sec": 0,
00:09:08.810 "rw_mbytes_per_sec": 0,
00:09:08.810 "r_mbytes_per_sec": 0,
00:09:08.810 "w_mbytes_per_sec": 0
00:09:08.810 },
00:09:08.810 "claimed": true,
00:09:08.810 "claim_type": "exclusive_write",
00:09:08.810 "zoned": false,
00:09:08.810 "supported_io_types": {
00:09:08.810 "read": true,
00:09:08.810 "write": true,
00:09:08.810 "unmap": true,
00:09:08.810 "write_zeroes": true,
00:09:08.810 "flush": true,
00:09:08.810 "reset": true,
00:09:08.810 "compare": false,
00:09:08.810 "compare_and_write": false,
00:09:08.810 "abort": true,
00:09:08.810 "nvme_admin": false,
00:09:08.810 "nvme_io": false
00:09:08.810 },
00:09:08.810 "memory_domains": [
00:09:08.810 {
00:09:08.810 "dma_device_id": "system",
00:09:08.810 "dma_device_type": 1
00:09:08.810 },
00:09:08.810 {
00:09:08.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:08.810 "dma_device_type": 2
00:09:08.810 }
00:09:08.810 ],
00:09:08.810 "driver_specific": {}
00:09:08.810 }
00:09:08.810 ]'
00:09:08.810 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size'
00:09:08.810 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bs=512
00:09:08.810 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks'
00:09:08.810 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576
00:09:08.810 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512
00:09:08.810 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # echo 512
00:09:08.810 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:09:08.810 11:17:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:09:10.197 11:17:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:09:10.197 11:17:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local i=0
00:09:10.197 11:17:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0
00:09:10.197 11:17:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]]
00:09:10.197 11:17:39
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:09:12.189 11:17:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:12.189 11:17:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:12.189 11:17:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:12.449 11:17:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:12.449 11:17:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:12.449 11:17:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:09:12.449 11:17:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:12.449 11:17:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:12.449 11:17:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:12.449 11:17:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:12.449 11:17:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:12.449 11:17:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:12.449 11:17:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:12.449 11:17:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:12.449 11:17:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:12.449 11:17:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:12.449 11:17:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:12.449 11:17:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:12.449 11:17:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:13.512 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:13.512 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:13.512 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:09:13.512 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:13.512 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:13.512 ************************************ 00:09:13.512 START TEST filesystem_ext4 00:09:13.512 ************************************ 00:09:13.512 11:17:42 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:13.512 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:13.512 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:13.512 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:13.512 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:09:13.512 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:09:13.512 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:09:13.512 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local force 00:09:13.512 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:09:13.512 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:09:13.512 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:13.512 mke2fs 1.46.5 (30-Dec-2021) 00:09:13.512 Discarding device blocks: 0/522240 done 00:09:13.512 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:13.512 Filesystem UUID: 6fafe19a-5dab-4f87-a344-76d5cb66a61e 00:09:13.512 Superblock backups stored on blocks: 00:09:13.512 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:13.512 00:09:13.512 Allocating group tables: 0/64 done 00:09:13.512 Writing inode tables: 0/64 done 00:09:13.512 Creating journal (8192 blocks): done 00:09:13.512 Writing superblocks and filesystem accounting information: 0/64 done 00:09:13.512 00:09:13.512 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@944 -- # return 0 00:09:13.512 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:13.512 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:13.512 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:13.512 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:13.512 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:13.512 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:13.512 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:13.774 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3441144 00:09:13.774 11:17:42 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:13.774 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:13.774 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:13.774 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:13.774 00:09:13.774 real 0m0.162s 00:09:13.774 user 0m0.028s 00:09:13.774 sys 0m0.064s 00:09:13.774 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:13.774 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:13.774 ************************************ 00:09:13.774 END TEST filesystem_ext4 00:09:13.774 ************************************ 00:09:13.774 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:13.774 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:09:13.774 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:13.774 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:13.774 ************************************ 00:09:13.774 START TEST filesystem_btrfs 00:09:13.774 ************************************ 00:09:13.774 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:13.774 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:13.774 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:13.774 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:13.774 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:09:13.774 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:09:13.774 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:09:13.774 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local force 00:09:13.774 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:09:13.774 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:09:13.774 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:13.774 btrfs-progs v6.6.2 00:09:13.774 See https://btrfs.readthedocs.io for more information. 
00:09:13.774 00:09:13.774 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:13.774 NOTE: several default settings have changed in version 5.15, please make sure 00:09:13.774 this does not affect your deployments: 00:09:13.774 - DUP for metadata (-m dup) 00:09:13.774 - enabled no-holes (-O no-holes) 00:09:13.774 - enabled free-space-tree (-R free-space-tree) 00:09:13.774 00:09:13.774 Label: (null) 00:09:13.774 UUID: 5b98b459-2896-4cd9-983b-c1281dac304e 00:09:13.774 Node size: 16384 00:09:13.774 Sector size: 4096 00:09:13.774 Filesystem size: 510.00MiB 00:09:13.774 Block group profiles: 00:09:13.774 Data: single 8.00MiB 00:09:13.774 Metadata: DUP 32.00MiB 00:09:13.774 System: DUP 8.00MiB 00:09:13.774 SSD detected: yes 00:09:13.774 Zoned device: no 00:09:13.774 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:13.774 Runtime features: free-space-tree 00:09:13.774 Checksum: crc32c 00:09:13.774 Number of devices: 1 00:09:13.774 Devices: 00:09:13.774 ID SIZE PATH 00:09:13.774 1 510.00MiB /dev/nvme0n1p1 00:09:13.774 00:09:13.774 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@944 -- # return 0 00:09:13.774 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:13.774 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3441144 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:14.035 00:09:14.035 real 0m0.221s 00:09:14.035 user 0m0.035s 00:09:14.035 sys 0m0.120s 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:14.035 ************************************ 00:09:14.035 END TEST filesystem_btrfs 00:09:14.035 ************************************ 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 
-- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:14.035 ************************************ 00:09:14.035 START TEST filesystem_xfs 00:09:14.035 ************************************ 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local i=0 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local force 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # force=-f 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:14.035 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:14.035 = sectsz=512 attr=2, projid32bit=1 00:09:14.035 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:14.035 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:14.035 data = bsize=4096 blocks=130560, imaxpct=25 00:09:14.035 = sunit=0 swidth=0 blks 00:09:14.035 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:14.035 log =internal log bsize=4096 blocks=16384, version=2 00:09:14.035 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:14.035 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:14.035 Discarding blocks...Done. 
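
mkfs output aside, each filesystem_* subtest drives the same I/O-through-the-filesystem loop, visible in the ext4 and btrfs traces above and in the xfs trace that follows. A condensed sketch of that loop, with $fstype and $force standing in for the per-test values ($force is -F for ext4 and -f for btrfs/xfs, and 3441144 is this run's nvmf_tgt pid):

  mkfs.$fstype $force /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa        # write something through the NVMe-oF block device
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 3441144              # liveness check: the target must have survived the I/O
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # the namespace and its partition must still be visible

This is a sketch of the flow rather than the literal harness code; the real make_filesystem helper also carries retry bookkeeping (the local i=0 in the trace).
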
00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@944 -- # return 0 00:09:14.035 11:17:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:14.035 11:17:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:14.295 11:17:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:14.295 11:17:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:14.295 11:17:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:14.295 11:17:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:14.295 11:17:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:14.295 11:17:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3441144 00:09:14.295 11:17:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:14.295 11:17:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:14.295 11:17:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:14.295 11:17:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:14.295 00:09:14.295 real 0m0.175s 00:09:14.295 user 0m0.016s 00:09:14.295 sys 0m0.077s 00:09:14.295 11:17:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:14.295 11:17:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:14.295 ************************************ 00:09:14.295 END TEST filesystem_xfs 00:09:14.295 ************************************ 00:09:14.295 11:17:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:14.295 11:17:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:14.295 11:17:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:15.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.682 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:15.682 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:09:15.682 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:15.682 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.682 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o 
NAME,SERIAL 00:09:15.682 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.682 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:09:15.682 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:15.682 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:15.682 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.682 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:15.682 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:15.682 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3441144 00:09:15.682 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 3441144 ']' 00:09:15.682 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # kill -0 3441144 00:09:15.682 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # uname 00:09:15.682 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:15.682 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3441144 00:09:15.682 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:15.682 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:15.682 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3441144' 00:09:15.682 killing process with pid 3441144 00:09:15.682 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # kill 3441144 00:09:15.682 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # wait 3441144 00:09:15.943 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:15.943 00:09:15.943 real 0m8.383s 00:09:15.943 user 0m32.833s 00:09:15.943 sys 0m1.046s 00:09:15.943 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:15.943 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.943 ************************************ 00:09:15.943 END TEST nvmf_filesystem_no_in_capsule 00:09:15.943 ************************************ 00:09:15.943 11:17:44 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:15.943 11:17:44 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:15.943 11:17:44 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:15.943 11:17:44 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@10 -- # set +x 00:09:15.943 ************************************ 00:09:15.943 START TEST nvmf_filesystem_in_capsule 00:09:15.943 ************************************ 00:09:15.943 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 4096 00:09:15.943 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:15.943 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:15.943 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:15.943 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:15.943 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.943 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3443036 00:09:15.943 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3443036 00:09:15.943 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:15.943 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 3443036 ']' 00:09:15.943 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.943 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:15.943 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.943 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:15.943 11:17:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.943 [2024-06-10 11:17:44.875488] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:09:15.943 [2024-06-10 11:17:44.875535] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.943 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.204 [2024-06-10 11:17:44.935909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:16.204 [2024-06-10 11:17:45.002415] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.204 [2024-06-10 11:17:45.002449] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.204 [2024-06-10 11:17:45.002460] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.204 [2024-06-10 11:17:45.002467] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:09:16.204 [2024-06-10 11:17:45.002472] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:16.204 [2024-06-10 11:17:45.002610] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.204 [2024-06-10 11:17:45.002727] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.204 [2024-06-10 11:17:45.002884] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.204 [2024-06-10 11:17:45.002884] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.775 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:16.775 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:09:16.775 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:16.775 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:16.775 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:16.775 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.775 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:16.775 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:09:16.775 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:16.775 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:16.775 [2024-06-10 11:17:45.729845] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13cd0b0/0x13d15a0) succeed. 00:09:16.775 [2024-06-10 11:17:45.744694] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13ce6f0/0x1412c30) succeed. 
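The in-capsule variant repeats the target bring-up traced above; the distinguishing option is filesystem.sh@52 passing -c 4096, so up to 4096 bytes of write data travel inside the RDMA command capsule instead of being fetched separately. The RPC sequence that follows in the trace (malloc bdev, subsystem, namespace, listener) can be reproduced directly with SPDK's rpc.py; a minimal sketch using the same names and addresses as the log, assuming nvmf_tgt is already listening on /var/tmp/spdk.sock:

#!/usr/bin/env bash
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"   # socket path assumed from the trace

# RDMA transport with a 4096-byte in-capsule data size (filesystem.sh@52)
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096

# 512 MiB malloc bdev with 512-byte blocks -> 1048576 blocks,
# matching the bdev_get_bdevs output later in the trace
$RPC bdev_malloc_create 512 512 -b Malloc1

# Subsystem cnode1, any host allowed, serial later matched by waitforserial
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420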
00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:17.036 Malloc1 00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:17.036 [2024-06-10 11:17:45.975535] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.036 11:17:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:17.036 
11:17:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.036 11:17:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:09:17.036 { 00:09:17.036 "name": "Malloc1", 00:09:17.036 "aliases": [ 00:09:17.036 "310f1ee2-5b56-4dcb-8d4d-77675aaca738" 00:09:17.036 ], 00:09:17.036 "product_name": "Malloc disk", 00:09:17.036 "block_size": 512, 00:09:17.036 "num_blocks": 1048576, 00:09:17.036 "uuid": "310f1ee2-5b56-4dcb-8d4d-77675aaca738", 00:09:17.036 "assigned_rate_limits": { 00:09:17.036 "rw_ios_per_sec": 0, 00:09:17.036 "rw_mbytes_per_sec": 0, 00:09:17.036 "r_mbytes_per_sec": 0, 00:09:17.036 "w_mbytes_per_sec": 0 00:09:17.036 }, 00:09:17.036 "claimed": true, 00:09:17.036 "claim_type": "exclusive_write", 00:09:17.036 "zoned": false, 00:09:17.036 "supported_io_types": { 00:09:17.036 "read": true, 00:09:17.036 "write": true, 00:09:17.036 "unmap": true, 00:09:17.036 "write_zeroes": true, 00:09:17.036 "flush": true, 00:09:17.036 "reset": true, 00:09:17.036 "compare": false, 00:09:17.036 "compare_and_write": false, 00:09:17.036 "abort": true, 00:09:17.036 "nvme_admin": false, 00:09:17.036 "nvme_io": false 00:09:17.036 }, 00:09:17.036 "memory_domains": [ 00:09:17.036 { 00:09:17.036 "dma_device_id": "system", 00:09:17.036 "dma_device_type": 1 00:09:17.036 }, 00:09:17.036 { 00:09:17.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.036 "dma_device_type": 2 00:09:17.036 } 00:09:17.036 ], 00:09:17.036 "driver_specific": {} 00:09:17.036 } 00:09:17.036 ]' 00:09:17.036 11:17:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:09:17.297 11:17:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:09:17.297 11:17:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:09:17.297 11:17:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:09:17.297 11:17:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:09:17.297 11:17:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:09:17.297 11:17:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:17.297 11:17:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:18.683 11:17:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:18.683 11:17:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:09:18.683 11:17:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:18.683 11:17:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:18.683 11:17:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:09:20.597 11:17:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:20.597 11:17:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:20.597 11:17:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.598 11:17:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:20.598 11:17:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.598 11:17:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:09:20.598 11:17:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:20.598 11:17:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:20.598 11:17:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:20.598 11:17:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:20.598 11:17:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:20.598 11:17:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:20.598 11:17:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:20.598 11:17:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:20.598 11:17:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:20.598 11:17:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:20.598 11:17:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:20.858 11:17:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:20.858 11:17:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:21.800 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:21.800 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:21.800 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:09:21.800 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:21.800 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:21.800 ************************************ 00:09:21.800 START TEST filesystem_in_capsule_ext4 00:09:21.800 ************************************ 00:09:21.800 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:21.800 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 
-- target/filesystem.sh@18 -- # fstype=ext4 00:09:21.800 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:21.800 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:21.800 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:09:21.800 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:09:21.800 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:09:21.800 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local force 00:09:21.800 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:09:21.800 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:09:21.800 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:21.800 mke2fs 1.46.5 (30-Dec-2021) 00:09:21.800 Discarding device blocks: 0/522240 done 00:09:21.800 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:21.800 Filesystem UUID: 1f73b99e-22bc-4107-a260-dd4649a22c84 00:09:21.800 Superblock backups stored on blocks: 00:09:21.800 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:21.800 00:09:21.800 Allocating group tables: 0/64 done 00:09:21.800 Writing inode tables: 0/64 done 00:09:22.061 Creating journal (8192 blocks): done 00:09:22.061 Writing superblocks and filesystem accounting information: 0/64 done 00:09:22.061 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@944 -- # return 0 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3443036 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- 
# lsblk -l -o NAME 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:22.061 00:09:22.061 real 0m0.150s 00:09:22.061 user 0m0.028s 00:09:22.061 sys 0m0.063s 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:22.061 ************************************ 00:09:22.061 END TEST filesystem_in_capsule_ext4 00:09:22.061 ************************************ 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.061 ************************************ 00:09:22.061 START TEST filesystem_in_capsule_btrfs 00:09:22.061 ************************************ 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local force 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:09:22.061 11:17:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:22.061 btrfs-progs v6.6.2 00:09:22.061 See 
https://btrfs.readthedocs.io for more information. 00:09:22.061 00:09:22.061 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:22.061 NOTE: several default settings have changed in version 5.15, please make sure 00:09:22.061 this does not affect your deployments: 00:09:22.061 - DUP for metadata (-m dup) 00:09:22.061 - enabled no-holes (-O no-holes) 00:09:22.061 - enabled free-space-tree (-R free-space-tree) 00:09:22.061 00:09:22.061 Label: (null) 00:09:22.061 UUID: 2bd2e07b-b826-4b49-ab41-da254924bad0 00:09:22.061 Node size: 16384 00:09:22.061 Sector size: 4096 00:09:22.061 Filesystem size: 510.00MiB 00:09:22.061 Block group profiles: 00:09:22.061 Data: single 8.00MiB 00:09:22.061 Metadata: DUP 32.00MiB 00:09:22.061 System: DUP 8.00MiB 00:09:22.061 SSD detected: yes 00:09:22.061 Zoned device: no 00:09:22.061 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:22.061 Runtime features: free-space-tree 00:09:22.061 Checksum: crc32c 00:09:22.061 Number of devices: 1 00:09:22.061 Devices: 00:09:22.061 ID SIZE PATH 00:09:22.061 1 510.00MiB /dev/nvme0n1p1 00:09:22.061 00:09:22.061 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@944 -- # return 0 00:09:22.061 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3443036 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:22.321 00:09:22.321 real 0m0.221s 00:09:22.321 user 0m0.016s 00:09:22.321 sys 0m0.133s 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:22.321 ************************************ 00:09:22.321 END TEST 
filesystem_in_capsule_btrfs 00:09:22.321 ************************************ 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.321 ************************************ 00:09:22.321 START TEST filesystem_in_capsule_xfs 00:09:22.321 ************************************ 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local i=0 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local force 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # force=-f 00:09:22.321 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:22.582 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:22.582 = sectsz=512 attr=2, projid32bit=1 00:09:22.582 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:22.582 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:22.582 data = bsize=4096 blocks=130560, imaxpct=25 00:09:22.582 = sunit=0 swidth=0 blks 00:09:22.582 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:22.582 log =internal log bsize=4096 blocks=16384, version=2 00:09:22.582 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:22.582 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:22.582 Discarding blocks...Done. 
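All three sub-tests funnel through the make_filesystem helper traced above; the only per-filesystem difference it has to handle is the force flag, since mkfs.ext4 wants -F while mkfs.btrfs and mkfs.xfs take -f. A condensed sketch of that logic using the variable names from the trace (the traced counter i hints at a retry loop in the real helper, omitted here):

make_filesystem() {
    local fstype=$1        # ext4 | btrfs | xfs
    local dev_name=$2      # e.g. /dev/nvme0n1p1, the GPT partition created by parted
    local i=0              # retry counter in the real helper; unused in this sketch
    local force
    if [ "$fstype" = ext4 ]; then
        force=-F           # mkfs.ext4 needs -F to overwrite non-interactively
    else
        force=-f           # btrfs/xfs equivalent
    fi
    mkfs.$fstype $force "$dev_name"
}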
00:09:22.582 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@944 -- # return 0 00:09:22.582 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:22.582 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:22.582 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:22.582 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:22.582 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:22.582 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:22.582 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:22.582 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3443036 00:09:22.582 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:22.582 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:22.582 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:22.582 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:22.582 00:09:22.582 real 0m0.159s 00:09:22.582 user 0m0.026s 00:09:22.582 sys 0m0.068s 00:09:22.582 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:22.582 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:22.582 ************************************ 00:09:22.582 END TEST filesystem_in_capsule_xfs 00:09:22.582 ************************************ 00:09:22.583 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:22.583 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:22.583 11:17:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:23.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.966 11:17:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:23.966 11:17:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:09:23.966 11:17:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:23.966 11:17:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.966 11:17:52 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:23.966 11:17:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.966 11:17:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:09:23.966 11:17:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:23.966 11:17:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:23.966 11:17:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.966 11:17:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:23.966 11:17:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:23.966 11:17:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3443036 00:09:23.966 11:17:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 3443036 ']' 00:09:23.966 11:17:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # kill -0 3443036 00:09:23.966 11:17:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # uname 00:09:23.966 11:17:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:23.966 11:17:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3443036 00:09:23.967 11:17:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:23.967 11:17:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:23.967 11:17:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3443036' 00:09:23.967 killing process with pid 3443036 00:09:23.967 11:17:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # kill 3443036 00:09:23.967 11:17:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # wait 3443036 00:09:24.227 11:17:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:24.227 00:09:24.227 real 0m8.346s 00:09:24.227 user 0m32.663s 00:09:24.227 sys 0m1.094s 00:09:24.227 11:17:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:24.227 11:17:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:24.227 ************************************ 00:09:24.227 END TEST nvmf_filesystem_in_capsule 00:09:24.227 ************************************ 00:09:24.535 11:17:53 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:24.535 11:17:53 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:24.535 11:17:53 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:09:24.535 11:17:53 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:24.535 
11:17:53 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:24.535 11:17:53 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:09:24.535 11:17:53 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:24.535 11:17:53 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:24.535 rmmod nvme_rdma 00:09:24.535 rmmod nvme_fabrics 00:09:24.535 11:17:53 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:24.535 11:17:53 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:09:24.535 11:17:53 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:09:24.535 11:17:53 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:24.535 11:17:53 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:24.535 11:17:53 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:24.535 00:09:24.535 real 0m24.274s 00:09:24.535 user 1m7.679s 00:09:24.535 sys 0m7.612s 00:09:24.535 11:17:53 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:24.535 11:17:53 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:24.535 ************************************ 00:09:24.535 END TEST nvmf_filesystem 00:09:24.535 ************************************ 00:09:24.536 11:17:53 nvmf_rdma -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:09:24.536 11:17:53 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:24.536 11:17:53 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:24.536 11:17:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:24.536 ************************************ 00:09:24.536 START TEST nvmf_target_discovery 00:09:24.536 ************************************ 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:09:24.536 * Looking for test storage... 
00:09:24.536 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:09:24.536 11:17:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:31.121 11:17:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:09:31.121 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:09:31.121 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:31.121 11:18:00 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:09:31.121 Found net devices under 0000:98:00.0: mlx_0_0 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:09:31.121 Found net devices under 0000:98:00.1: mlx_0_1 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@420 -- # rdma_device_init 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # uname 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:31.121 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:31.122 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:31.383 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:31.383 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:09:31.383 altname enp152s0f0np0 00:09:31.383 altname ens817f0np0 00:09:31.383 inet 192.168.100.8/24 scope global mlx_0_0 00:09:31.383 valid_lft forever preferred_lft forever 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:31.383 11:18:00 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:31.383 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:31.383 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:09:31.383 altname enp152s0f1np1 00:09:31.383 altname ens817f1np1 00:09:31.383 inet 192.168.100.9/24 scope global mlx_0_1 00:09:31.383 valid_lft forever preferred_lft forever 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:31.383 192.168.100.9' 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:31.383 192.168.100.9' 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # head -n 1 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:31.383 192.168.100.9' 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # tail -n +2 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # head -n 1 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3448611 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3448611 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@830 -- # '[' -z 3448611 ']' 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
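The trace up to this point has resolved the test's RDMA addressing: get_rdma_if_list matched the mlx_0_0/mlx_0_1 netdevs, get_ip_address read each interface's IPv4 address, and NVMF_FIRST_TARGET_IP/NVMF_SECOND_TARGET_IP were set to 192.168.100.8 and 192.168.100.9. A minimal standalone sketch of the same derivation, assuming the interfaces are already named as in this run, would be:

    # Hedged sketch: derive the target IPs the way nvmf/common.sh's helpers do above.
    # Assumes the RDMA-capable netdevs are mlx_0_0 and mlx_0_1, as in this log.
    get_ip() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip mlx_0_0)      # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip mlx_0_1)     # 192.168.100.9 in this run
    [ -n "$NVMF_FIRST_TARGET_IP" ] || { echo 'no RDMA IP found' >&2; exit 1; }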
00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:31.383 11:18:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:31.384 [2024-06-10 11:18:00.275634] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:09:31.384 [2024-06-10 11:18:00.275690] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.384 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.384 [2024-06-10 11:18:00.338474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:31.644 [2024-06-10 11:18:00.407103] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.644 [2024-06-10 11:18:00.407139] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.644 [2024-06-10 11:18:00.407148] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.644 [2024-06-10 11:18:00.407154] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.644 [2024-06-10 11:18:00.407160] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.644 [2024-06-10 11:18:00.407365] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.644 [2024-06-10 11:18:00.407480] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.644 [2024-06-10 11:18:00.407635] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.644 [2024-06-10 11:18:00.407636] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.244 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:32.244 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@863 -- # return 0 00:09:32.244 11:18:01 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:32.244 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:32.244 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.244 11:18:01 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.244 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:32.244 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.244 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.244 [2024-06-10 11:18:01.127202] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x193a0b0/0x193e5a0) succeed. 00:09:32.244 [2024-06-10 11:18:01.140353] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x193b6f0/0x197fc30) succeed. 
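With the RDMA transport created and the mlx5_0/mlx5_1 IB devices registered, target/discovery.sh builds its topology: four null bdevs, four subsystems each carrying one namespace and one RDMA listener on 192.168.100.8:4420, the discovery listener, and a referral on port 4430. A condensed, hedged sketch of that sequence, using the same RPC names that appear in the trace below (rpc.py and its path stand in for the test's rpc_cmd wrapper):

    # Hedged sketch of the setup traced below; rpc.py path is assumed, RPC names are as logged.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    for i in 1 2 3 4; do
        $rpc bdev_null_create "Null$i" 102400 512                    # size/block size as traced
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$(printf '%014d' "$i")"
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
    done
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430

The nvme discover output and the nvmf_get_subsystems dump further down confirm the result: six discovery-log records, namely the current discovery subsystem, the four cnode subsystems, and the port-4430 referral.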
00:09:32.505 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.505 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.506 Null1 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.506 [2024-06-10 11:18:01.313628] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.506 Null2 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:32.506 11:18:01 
nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.506 Null3 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.506 Null4 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.506 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.767 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.767 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -a 192.168.100.8 -s 4420 00:09:32.767 00:09:32.767 Discovery Log Number of Records 6, Generation counter 6 00:09:32.767 =====Discovery Log Entry 0====== 00:09:32.767 trtype: rdma 00:09:32.767 adrfam: ipv4 00:09:32.767 subtype: current discovery subsystem 00:09:32.767 treq: not required 00:09:32.767 portid: 0 00:09:32.767 trsvcid: 4420 00:09:32.767 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:32.767 traddr: 192.168.100.8 00:09:32.767 eflags: explicit discovery connections, duplicate discovery information 00:09:32.767 rdma_prtype: not specified 00:09:32.767 rdma_qptype: connected 00:09:32.767 rdma_cms: rdma-cm 00:09:32.767 rdma_pkey: 0x0000 00:09:32.767 =====Discovery Log Entry 1====== 00:09:32.767 trtype: rdma 00:09:32.767 adrfam: ipv4 00:09:32.767 subtype: nvme subsystem 00:09:32.767 treq: not required 00:09:32.767 portid: 0 00:09:32.767 trsvcid: 4420 00:09:32.767 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:32.767 traddr: 192.168.100.8 00:09:32.767 eflags: none 00:09:32.767 rdma_prtype: not specified 00:09:32.767 rdma_qptype: connected 00:09:32.767 rdma_cms: rdma-cm 00:09:32.767 rdma_pkey: 0x0000 00:09:32.767 =====Discovery Log Entry 2====== 00:09:32.767 
trtype: rdma 00:09:32.767 adrfam: ipv4 00:09:32.767 subtype: nvme subsystem 00:09:32.767 treq: not required 00:09:32.767 portid: 0 00:09:32.767 trsvcid: 4420 00:09:32.767 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:32.767 traddr: 192.168.100.8 00:09:32.767 eflags: none 00:09:32.767 rdma_prtype: not specified 00:09:32.767 rdma_qptype: connected 00:09:32.767 rdma_cms: rdma-cm 00:09:32.767 rdma_pkey: 0x0000 00:09:32.767 =====Discovery Log Entry 3====== 00:09:32.767 trtype: rdma 00:09:32.767 adrfam: ipv4 00:09:32.767 subtype: nvme subsystem 00:09:32.767 treq: not required 00:09:32.767 portid: 0 00:09:32.767 trsvcid: 4420 00:09:32.767 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:32.767 traddr: 192.168.100.8 00:09:32.767 eflags: none 00:09:32.767 rdma_prtype: not specified 00:09:32.767 rdma_qptype: connected 00:09:32.767 rdma_cms: rdma-cm 00:09:32.767 rdma_pkey: 0x0000 00:09:32.767 =====Discovery Log Entry 4====== 00:09:32.767 trtype: rdma 00:09:32.767 adrfam: ipv4 00:09:32.767 subtype: nvme subsystem 00:09:32.767 treq: not required 00:09:32.767 portid: 0 00:09:32.767 trsvcid: 4420 00:09:32.767 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:32.768 traddr: 192.168.100.8 00:09:32.768 eflags: none 00:09:32.768 rdma_prtype: not specified 00:09:32.768 rdma_qptype: connected 00:09:32.768 rdma_cms: rdma-cm 00:09:32.768 rdma_pkey: 0x0000 00:09:32.768 =====Discovery Log Entry 5====== 00:09:32.768 trtype: rdma 00:09:32.768 adrfam: ipv4 00:09:32.768 subtype: discovery subsystem referral 00:09:32.768 treq: not required 00:09:32.768 portid: 0 00:09:32.768 trsvcid: 4430 00:09:32.768 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:32.768 traddr: 192.168.100.8 00:09:32.768 eflags: none 00:09:32.768 rdma_prtype: unrecognized 00:09:32.768 rdma_qptype: unrecognized 00:09:32.768 rdma_cms: unrecognized 00:09:32.768 rdma_pkey: 0x0000 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:32.768 Perform nvmf subsystem discovery via RPC 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.768 [ 00:09:32.768 { 00:09:32.768 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:32.768 "subtype": "Discovery", 00:09:32.768 "listen_addresses": [ 00:09:32.768 { 00:09:32.768 "trtype": "RDMA", 00:09:32.768 "adrfam": "IPv4", 00:09:32.768 "traddr": "192.168.100.8", 00:09:32.768 "trsvcid": "4420" 00:09:32.768 } 00:09:32.768 ], 00:09:32.768 "allow_any_host": true, 00:09:32.768 "hosts": [] 00:09:32.768 }, 00:09:32.768 { 00:09:32.768 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:32.768 "subtype": "NVMe", 00:09:32.768 "listen_addresses": [ 00:09:32.768 { 00:09:32.768 "trtype": "RDMA", 00:09:32.768 "adrfam": "IPv4", 00:09:32.768 "traddr": "192.168.100.8", 00:09:32.768 "trsvcid": "4420" 00:09:32.768 } 00:09:32.768 ], 00:09:32.768 "allow_any_host": true, 00:09:32.768 "hosts": [], 00:09:32.768 "serial_number": "SPDK00000000000001", 00:09:32.768 "model_number": "SPDK bdev Controller", 00:09:32.768 "max_namespaces": 32, 00:09:32.768 "min_cntlid": 1, 00:09:32.768 "max_cntlid": 65519, 00:09:32.768 "namespaces": [ 00:09:32.768 { 00:09:32.768 "nsid": 1, 00:09:32.768 "bdev_name": "Null1", 00:09:32.768 "name": "Null1", 00:09:32.768 "nguid": "62D21761F3724BD3A795665D1753EA3E", 00:09:32.768 "uuid": 
"62d21761-f372-4bd3-a795-665d1753ea3e" 00:09:32.768 } 00:09:32.768 ] 00:09:32.768 }, 00:09:32.768 { 00:09:32.768 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:32.768 "subtype": "NVMe", 00:09:32.768 "listen_addresses": [ 00:09:32.768 { 00:09:32.768 "trtype": "RDMA", 00:09:32.768 "adrfam": "IPv4", 00:09:32.768 "traddr": "192.168.100.8", 00:09:32.768 "trsvcid": "4420" 00:09:32.768 } 00:09:32.768 ], 00:09:32.768 "allow_any_host": true, 00:09:32.768 "hosts": [], 00:09:32.768 "serial_number": "SPDK00000000000002", 00:09:32.768 "model_number": "SPDK bdev Controller", 00:09:32.768 "max_namespaces": 32, 00:09:32.768 "min_cntlid": 1, 00:09:32.768 "max_cntlid": 65519, 00:09:32.768 "namespaces": [ 00:09:32.768 { 00:09:32.768 "nsid": 1, 00:09:32.768 "bdev_name": "Null2", 00:09:32.768 "name": "Null2", 00:09:32.768 "nguid": "C037A26BA0024CA8B367629C6EBCFE7E", 00:09:32.768 "uuid": "c037a26b-a002-4ca8-b367-629c6ebcfe7e" 00:09:32.768 } 00:09:32.768 ] 00:09:32.768 }, 00:09:32.768 { 00:09:32.768 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:32.768 "subtype": "NVMe", 00:09:32.768 "listen_addresses": [ 00:09:32.768 { 00:09:32.768 "trtype": "RDMA", 00:09:32.768 "adrfam": "IPv4", 00:09:32.768 "traddr": "192.168.100.8", 00:09:32.768 "trsvcid": "4420" 00:09:32.768 } 00:09:32.768 ], 00:09:32.768 "allow_any_host": true, 00:09:32.768 "hosts": [], 00:09:32.768 "serial_number": "SPDK00000000000003", 00:09:32.768 "model_number": "SPDK bdev Controller", 00:09:32.768 "max_namespaces": 32, 00:09:32.768 "min_cntlid": 1, 00:09:32.768 "max_cntlid": 65519, 00:09:32.768 "namespaces": [ 00:09:32.768 { 00:09:32.768 "nsid": 1, 00:09:32.768 "bdev_name": "Null3", 00:09:32.768 "name": "Null3", 00:09:32.768 "nguid": "3C4AD77D72084DF8A696A404D983655A", 00:09:32.768 "uuid": "3c4ad77d-7208-4df8-a696-a404d983655a" 00:09:32.768 } 00:09:32.768 ] 00:09:32.768 }, 00:09:32.768 { 00:09:32.768 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:32.768 "subtype": "NVMe", 00:09:32.768 "listen_addresses": [ 00:09:32.768 { 00:09:32.768 "trtype": "RDMA", 00:09:32.768 "adrfam": "IPv4", 00:09:32.768 "traddr": "192.168.100.8", 00:09:32.768 "trsvcid": "4420" 00:09:32.768 } 00:09:32.768 ], 00:09:32.768 "allow_any_host": true, 00:09:32.768 "hosts": [], 00:09:32.768 "serial_number": "SPDK00000000000004", 00:09:32.768 "model_number": "SPDK bdev Controller", 00:09:32.768 "max_namespaces": 32, 00:09:32.768 "min_cntlid": 1, 00:09:32.768 "max_cntlid": 65519, 00:09:32.768 "namespaces": [ 00:09:32.768 { 00:09:32.768 "nsid": 1, 00:09:32.768 "bdev_name": "Null4", 00:09:32.768 "name": "Null4", 00:09:32.768 "nguid": "7E15799DDECE4B968759AEA377ADFA53", 00:09:32.768 "uuid": "7e15799d-dece-4b96-8759-aea377adfa53" 00:09:32.768 } 00:09:32.768 ] 00:09:32.768 } 00:09:32.768 ] 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.768 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:33.029 rmmod nvme_rdma 00:09:33.029 rmmod nvme_fabrics 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3448611 ']' 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3448611 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@949 -- # '[' -z 3448611 ']' 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@953 -- # kill -0 3448611 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@954 -- # uname 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3448611 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3448611' 00:09:33.029 killing process with pid 3448611 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@968 -- # kill 3448611 00:09:33.029 11:18:01 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@973 -- # wait 3448611 00:09:33.288 11:18:02 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:33.288 11:18:02 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:33.288 00:09:33.288 real 0m8.735s 00:09:33.288 user 0m8.561s 00:09:33.288 sys 0m5.474s 00:09:33.288 11:18:02 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:33.288 11:18:02 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:33.288 ************************************ 00:09:33.288 END TEST nvmf_target_discovery 00:09:33.288 ************************************ 00:09:33.288 11:18:02 nvmf_rdma -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:09:33.288 11:18:02 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:33.288 11:18:02 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:33.288 11:18:02 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:33.288 ************************************ 00:09:33.288 START TEST nvmf_referrals 00:09:33.288 ************************************ 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:09:33.288 * Looking for test storage... 00:09:33.288 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@33 -- # '[' 
-n '' ']' 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:33.288 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:33.547 11:18:02 nvmf_rdma.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:33.547 11:18:02 nvmf_rdma.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:33.547 11:18:02 nvmf_rdma.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:33.547 11:18:02 nvmf_rdma.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:33.547 11:18:02 nvmf_rdma.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:33.547 11:18:02 nvmf_rdma.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:33.547 11:18:02 nvmf_rdma.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:33.547 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:33.547 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.547 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:33.547 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:33.547 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:33.547 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.547 11:18:02 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.547 11:18:02 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.547 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:33.547 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:33.547 11:18:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:33.547 11:18:02 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:09:41.705 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:09:41.705 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:41.705 
11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:09:41.705 Found net devices under 0000:98:00.0: mlx_0_0 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:09:41.705 Found net devices under 0000:98:00.1: mlx_0_1 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@420 -- # rdma_device_init 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # uname 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:41.705 11:18:09 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:41.705 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:41.706 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:41.706 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:09:41.706 altname enp152s0f0np0 00:09:41.706 altname ens817f0np0 00:09:41.706 inet 192.168.100.8/24 scope global mlx_0_0 00:09:41.706 valid_lft forever preferred_lft forever 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 
-- # cut -d/ -f1 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:41.706 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:41.706 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:09:41.706 altname enp152s0f1np1 00:09:41.706 altname ens817f1np1 00:09:41.706 inet 192.168.100.9/24 scope global mlx_0_1 00:09:41.706 valid_lft forever preferred_lft forever 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals 
-- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:41.706 192.168.100.9' 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:41.706 192.168.100.9' 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # head -n 1 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:41.706 192.168.100.9' 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # tail -n +2 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # head -n 1 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3452761 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3452761 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@830 -- # '[' -z 3452761 ']' 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:41.706 11:18:09 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.706 [2024-06-10 11:18:09.418178] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
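[Note] The get_ip_address/get_available_rdma_ips steps traced above reduce to one pipeline per RDMA netdev plus a head/tail split over the collected list. A condensed sketch of that logic as the trace shows it (not the verbatim nvmf/common.sh source; rdma_ips is an illustrative variable name, and the mlx_0_0/mlx_0_1 interface names are the ones seen in this run):

    # First IPv4 address on an interface, with the /prefix stripped
    get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }

    rdma_ips="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"                                          # "192.168.100.8" + newline + "192.168.100.9"
    NVMF_FIRST_TARGET_IP=$(echo "$rdma_ips" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$rdma_ips" | tail -n +2 | head -n 1)  # 192.168.100.9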
00:09:41.706 [2024-06-10 11:18:09.418247] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.706 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.706 [2024-06-10 11:18:09.485712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:41.706 [2024-06-10 11:18:09.561554] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.706 [2024-06-10 11:18:09.561597] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.706 [2024-06-10 11:18:09.561604] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.706 [2024-06-10 11:18:09.561615] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.706 [2024-06-10 11:18:09.561621] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:41.706 [2024-06-10 11:18:09.561790] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.706 [2024-06-10 11:18:09.561929] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.706 [2024-06-10 11:18:09.561929] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.706 [2024-06-10 11:18:09.561876] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.706 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:41.706 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@863 -- # return 0 00:09:41.706 11:18:10 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:41.706 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:41.706 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.706 11:18:10 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.706 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:41.706 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.706 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.706 [2024-06-10 11:18:10.280154] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xeaa0b0/0xeae5a0) succeed. 00:09:41.706 [2024-06-10 11:18:10.294676] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xeab6f0/0xeefc30) succeed. 
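[Note] The rpc_cmd calls that follow provision the referrals under test; rpc_cmd forwards each command to the target over /var/tmp/spdk.sock. A condensed sketch of the same sequence, assuming it is run from the SPDK repository root via the scripts/rpc.py wrapper that rpc_cmd resolves to:

    # RDMA transport, then a discovery listener on the first target IP
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery

    # Three referral entries, then assert the target reports all three
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
    done
    (( $(scripts/rpc.py nvmf_discovery_get_referrals | jq length) == 3 ))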
00:09:41.706 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.707 [2024-06-10 11:18:10.422170] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # [[ 
127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.707 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 
--hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:41.967 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:42.228 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:42.228 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:42.228 11:18:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:42.228 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:42.228 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:42.228 11:18:11 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:42.228 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:42.228 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:42.228 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:42.228 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:42.228 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:42.228 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:42.228 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:42.228 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:42.228 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:42.228 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:42.488 
11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:42.488 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:42.749 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:42.749 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:42.749 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:42.749 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:42.749 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:42.749 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:42.749 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:42.749 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:42.749 11:18:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:42.749 11:18:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:42.749 11:18:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:42.749 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:42.749 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:42.749 11:18:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:42.749 11:18:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:42.749 11:18:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@83 -- # get_referral_ips nvme 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:43.010 rmmod nvme_rdma 00:09:43.010 rmmod nvme_fabrics 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3452761 ']' 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3452761 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@949 -- # '[' -z 3452761 ']' 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@953 -- # kill -0 3452761 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@954 -- # uname 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3452761 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3452761' 00:09:43.010 killing process with pid 3452761 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@968 -- # kill 3452761 00:09:43.010 11:18:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@973 -- # wait 3452761 00:09:43.271 11:18:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:43.271 11:18:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 
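[Note] The check repeated throughout the referrals test pairs the target's RPC view of the referral list with what a host actually receives from the discovery service. A minimal sketch of that round-trip, using the same jq filters the trace shows (NVME_HOSTNQN and NVME_HOSTID as exported by nvmf/common.sh):

    # Target-side view of the referrals
    rpc_ips=$(scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)

    # Host-side view via the discovery controller on 192.168.100.8:8009
    nvme_ips=$(nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
                   -t rdma -a 192.168.100.8 -s 8009 -o json |
               jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)

    [[ "$rpc_ips" == "$nvme_ips" ]]   # the test asserts both views agree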
00:09:43.271 
00:09:43.271 real 0m10.028s
00:09:43.271 user 0m13.212s
00:09:43.271 sys 0m6.010s
00:09:43.271 11:18:12 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1125 -- # xtrace_disable
00:09:43.271 11:18:12 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:09:43.271 ************************************
00:09:43.271 END TEST nvmf_referrals
00:09:43.271 ************************************
00:09:43.272 11:18:12 nvmf_rdma -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma
00:09:43.272 11:18:12 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:09:43.272 11:18:12 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable
00:09:43.272 11:18:12 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:09:43.272 ************************************
00:09:43.272 START TEST nvmf_connect_disconnect
00:09:43.272 ************************************
00:09:43.272 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma
00:09:43.533 * Looking for test storage...
00:09:43.533 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s
00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:43.533 11:18:12 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@301 -- 
# e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:09:51.678 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:09:51.678 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:51.678 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect 
-- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:09:51.679 Found net devices under 0000:98:00.0: mlx_0_0 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:09:51.679 Found net devices under 0000:98:00.1: mlx_0_1 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # uname 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:51.679 11:18:19 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:51.679 10: 
mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:51.679 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:09:51.679 altname enp152s0f0np0 00:09:51.679 altname ens817f0np0 00:09:51.679 inet 192.168.100.8/24 scope global mlx_0_0 00:09:51.679 valid_lft forever preferred_lft forever 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:51.679 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:51.679 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:09:51.679 altname enp152s0f1np1 00:09:51.679 altname ens817f1np1 00:09:51.679 inet 192.168.100.9/24 scope global mlx_0_1 00:09:51.679 valid_lft forever preferred_lft forever 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:51.679 
11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:51.679 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:51.680 192.168.100.9' 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:51.680 192.168.100.9' 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:51.680 192.168.100.9' 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- 
common/autotest_common.sh@723 -- # xtrace_disable 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3457670 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3457670 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # '[' -z 3457670 ']' 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:51.680 11:18:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:51.680 [2024-06-10 11:18:19.423325] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:09:51.680 [2024-06-10 11:18:19.423375] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.680 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.680 [2024-06-10 11:18:19.482681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.680 [2024-06-10 11:18:19.547545] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.680 [2024-06-10 11:18:19.547580] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.680 [2024-06-10 11:18:19.547588] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.680 [2024-06-10 11:18:19.547595] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.680 [2024-06-10 11:18:19.547600] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
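[Note] The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step above is waitforlisten polling until nvmf_tgt answers on its RPC socket. A simplified stand-in, not the actual autotest_common.sh implementation (waitforlisten_sketch is a hypothetical name; rpc_get_methods is used only as a cheap RPC to probe with):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 100; i > 0; i--)); do                  # max_retries=100, as in the trace
            kill -0 "$pid" 2>/dev/null || return 1       # target died during startup
            [[ -S $rpc_addr ]] &&
                scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }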
00:09:51.680 [2024-06-10 11:18:19.547734] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.680 [2024-06-10 11:18:19.547858] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.680 [2024-06-10 11:18:19.547964] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.680 [2024-06-10 11:18:19.547965] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@863 -- # return 0 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:51.680 [2024-06-10 11:18:20.248434] rdma.c:2724:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:09:51.680 [2024-06-10 11:18:20.280225] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x25120b0/0x25165a0) succeed. 00:09:51.680 [2024-06-10 11:18:20.294853] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x25136f0/0x2557c30) succeed. 
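Read back from the trace, nvmfappstart plus the first rpc_cmd reduce to two manual steps: launch the target, then create the RDMA transport over its UNIX-socket RPC channel. A hedged equivalent (rpc.py is SPDK's stock RPC client; the harness actually goes through its own rpc_cmd wrapper, so treat the client invocation as an assumption — the flags themselves are copied from the trace):

# SHM id 0, tracepoint mask 0xFFFF, core mask 0xF (the four reactors above);
# wait for /var/tmp/spdk.sock before issuing RPCs.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

# Traced transport parameters: 1024 shared buffers, 8192-byte I/O unit,
# in-capsule data size 0 -- which the target bumps to the 256-byte minimum,
# hence the msdbd=16 WARNING printed at rdma.c:2724.
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0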
00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:51.680 [2024-06-10 11:18:20.452294] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:51.680 11:18:20 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:56.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect 
-- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:15.996 rmmod nvme_rdma 00:10:15.996 rmmod nvme_fabrics 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3457670 ']' 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3457670 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@949 -- # '[' -z 3457670 ']' 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # kill -0 3457670 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # uname 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3457670 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3457670' 00:10:15.996 killing process with pid 3457670 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # kill 3457670 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # wait 3457670 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:15.996 00:10:15.996 real 0m32.180s 00:10:15.996 user 1m40.983s 00:10:15.996 sys 0m6.146s 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:15.996 11:18:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:15.996 ************************************ 00:10:15.996 END TEST nvmf_connect_disconnect 00:10:15.996 ************************************ 00:10:15.996 11:18:44 nvmf_rdma -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:10:15.996 11:18:44 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:15.996 11:18:44 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:15.996 11:18:44 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:15.996 ************************************ 00:10:15.996 START TEST nvmf_multitarget 00:10:15.996 ************************************ 00:10:15.996 11:18:44 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:10:15.996 * Looking for test storage... 
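Before its loop, connect_disconnect.sh provisioned one malloc namespace behind one subsystem; the five "disconnected 1 controller(s)" lines are its num_iterations=5 connect/disconnect cycles against that listener. The provisioning sequence, reconstructed from the traced rpc_cmd calls (again with rpc.py standing in for the wrapper):

scripts/rpc.py bdev_malloc_create 64 512    # 64 MiB ramdisk, 512 B blocks -> "Malloc0"
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

# Each iteration then does roughly the following, per the
# NVME_CONNECT='nvme connect -i 15' set during init (the exact host-side
# flags are an assumption, not shown in this part of the log):
#   nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
#   nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # -> "disconnected 1 controller(s)"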
00:10:15.996 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:15.996 11:18:44 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:15.996 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:15.996 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:15.996 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:10:15.997 11:18:44 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:22.585 11:18:51 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:10:22.585 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:10:22.585 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:22.585 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:10:22.586 Found net devices under 0000:98:00.0: mlx_0_0 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:10:22.586 Found net devices under 0000:98:00.1: mlx_0_1 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@420 -- # rdma_device_init 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # uname 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:22.586 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:22.848 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:22.848 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:10:22.848 altname enp152s0f0np0 00:10:22.848 altname ens817f0np0 00:10:22.848 inet 192.168.100.8/24 scope global mlx_0_0 00:10:22.848 valid_lft forever preferred_lft forever 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:22.848 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:22.848 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:10:22.848 altname enp152s0f1np1 00:10:22.848 altname ens817f1np1 00:10:22.848 inet 192.168.100.9/24 scope global mlx_0_1 00:10:22.848 valid_lft forever preferred_lft forever 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget 
-- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:22.848 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:22.849 192.168.100.9' 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:22.849 192.168.100.9' 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # head -n 1 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:22.849 192.168.100.9' 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@458 -- # tail -n +2 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # head -n 1 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3466466 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3466466 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@830 -- # '[' -z 3466466 ']' 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:22.849 11:18:51 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:22.849 [2024-06-10 11:18:51.790605] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:10:22.849 [2024-06-10 11:18:51.790673] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.849 EAL: No free 2048 kB hugepages reported on node 1 00:10:23.110 [2024-06-10 11:18:51.857583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:23.110 [2024-06-10 11:18:51.932856] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:23.110 [2024-06-10 11:18:51.932898] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:23.110 [2024-06-10 11:18:51.932905] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:23.110 [2024-06-10 11:18:51.932911] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:23.110 [2024-06-10 11:18:51.932917] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
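The nvmf_multitarget init traced above repeats the same environment bring-up as the first test: classify the two mlx5 ports (0x15b3:0x1015), load the RDMA kernel stack, confirm 192.168.100.8/9 on mlx_0_0/mlx_0_1, and start a fresh nvmf_tgt (pid 3466466). The module sequence from rdma_device_init (nvmf/common.sh@62-68, plus @474 once the transport options are set) is simply:

# Verbs/CM kernel modules, in the traced order:
for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$m"
done
modprobe nvme-rdma    # host-side NVMe-oF RDMA driver, loaded last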
00:10:23.110 [2024-06-10 11:18:51.933062] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.110 [2024-06-10 11:18:51.933184] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.110 [2024-06-10 11:18:51.933342] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.110 [2024-06-10 11:18:51.933343] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:10:23.681 11:18:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:23.681 11:18:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@863 -- # return 0 00:10:23.681 11:18:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:23.681 11:18:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:23.681 11:18:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:23.681 11:18:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.681 11:18:52 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:23.681 11:18:52 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:23.681 11:18:52 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:23.940 11:18:52 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:23.940 11:18:52 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:23.940 "nvmf_tgt_1" 00:10:23.941 11:18:52 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:23.941 "nvmf_tgt_2" 00:10:24.201 11:18:52 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:24.201 11:18:52 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:24.201 11:18:53 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:24.201 11:18:53 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:24.201 true 00:10:24.201 11:18:53 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:24.461 true 00:10:24.461 11:18:53 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:24.461 11:18:53 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:24.461 11:18:53 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:24.461 11:18:53 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:24.461 11:18:53 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:24.461 11:18:53 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:24.461 
11:18:53 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:10:24.461 11:18:53 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:24.461 11:18:53 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:24.461 11:18:53 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:10:24.461 11:18:53 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:24.461 11:18:53 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:24.461 rmmod nvme_rdma 00:10:24.461 rmmod nvme_fabrics 00:10:24.461 11:18:53 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:24.461 11:18:53 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:10:24.461 11:18:53 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:10:24.461 11:18:53 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3466466 ']' 00:10:24.461 11:18:53 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3466466 00:10:24.461 11:18:53 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@949 -- # '[' -z 3466466 ']' 00:10:24.461 11:18:53 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@953 -- # kill -0 3466466 00:10:24.461 11:18:53 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@954 -- # uname 00:10:24.461 11:18:53 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:24.461 11:18:53 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3466466 00:10:24.721 11:18:53 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:24.721 11:18:53 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:24.721 11:18:53 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3466466' 00:10:24.721 killing process with pid 3466466 00:10:24.721 11:18:53 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@968 -- # kill 3466466 00:10:24.721 11:18:53 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@973 -- # wait 3466466 00:10:24.721 11:18:53 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:24.721 11:18:53 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:24.721 00:10:24.721 real 0m9.088s 00:10:24.721 user 0m9.377s 00:10:24.721 sys 0m5.702s 00:10:24.721 11:18:53 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:24.721 11:18:53 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:24.721 ************************************ 00:10:24.721 END TEST nvmf_multitarget 00:10:24.721 ************************************ 00:10:24.721 11:18:53 nvmf_rdma -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:10:24.721 11:18:53 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:24.721 11:18:53 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:24.721 11:18:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:24.721 ************************************ 00:10:24.721 START TEST nvmf_rpc 00:10:24.721 ************************************ 00:10:24.721 11:18:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:10:24.982 * Looking for test 
storage... 00:10:24.982 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.982 11:18:53 
nvmf_rdma.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.982 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:10:24.983 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:24.983 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:24.983 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.983 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.983 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.983 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:24.983 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:24.983 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:24.983 11:18:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:24.983 11:18:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:24.983 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:24.983 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:24.983 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:24.983 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:24.983 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:24.983 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.983 11:18:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:24.983 11:18:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.983 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:24.983 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:24.983 11:18:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:10:24.983 11:18:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:10:31.571 11:19:00 
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:10:31.571 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 
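Stepping back to the multitarget test body traced before this nvmf_rpc init: it is a short lifecycle check on target objects, counting them with jq between create and delete calls. In sketch form, using the multitarget_rpc.py helper named in the trace (the relative path is an assumption; the full path appears above):

rpc=test/nvmf/target/multitarget_rpc.py
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # only the default target
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]    # default + the two new targets
$rpc nvmf_delete_target -n nvmf_tgt_1               # each delete returns "true"
$rpc nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # back to the default only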
00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:10:31.571 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:10:31.571 Found net devices under 0000:98:00.0: mlx_0_0 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:10:31.571 Found net devices under 0000:98:00.1: mlx_0_1 00:10:31.571 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.572 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:31.572 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:10:31.572 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:31.572 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:31.572 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:31.572 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@420 -- # rdma_device_init 00:10:31.572 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:31.572 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # uname 00:10:31.572 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:31.572 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@62 -- # 
modprobe ib_cm 00:10:31.572 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:31.572 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:31.572 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:31.572 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:31.572 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:31.572 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:31.572 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:31.572 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:31.572 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:31.572 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:31.572 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:31.572 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:31.572 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:31.862 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:31.862 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:10:31.862 altname enp152s0f0np0 00:10:31.862 altname ens817f0np0 00:10:31.862 inet 192.168.100.8/24 scope global mlx_0_0 00:10:31.862 valid_lft forever preferred_lft 
forever 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:31.862 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:31.862 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:10:31.862 altname enp152s0f1np1 00:10:31.862 altname ens817f1np1 00:10:31.862 inet 192.168.100.9/24 scope global mlx_0_1 00:10:31.862 valid_lft forever preferred_lft forever 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:31.862 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:31.862 192.168.100.9' 00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:31.863 192.168.100.9' 00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # head -n 1 00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:31.863 192.168.100.9' 00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # tail -n +2 00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # head -n 1 00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3470567 00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3470567 00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@830 -- # '[' -z 3470567 ']' 00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
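The get_ip_address calls traced above reduce to one ip/awk/cut pipeline per interface, with head/tail splitting the resulting list into the first and second target IPs. A standalone equivalent of just that logic, with the interface names hard-coded here whereas the script discovers them via get_rdma_if_list:

  # First IPv4 address on an interface, without the /prefix length.
  get_ip_address() {
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }

  RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9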
00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:31.863 11:19:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.863 [2024-06-10 11:19:00.744789] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:10:31.863 [2024-06-10 11:19:00.744843] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.863 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.863 [2024-06-10 11:19:00.808210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:32.123 [2024-06-10 11:19:00.877560] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.123 [2024-06-10 11:19:00.877603] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.123 [2024-06-10 11:19:00.877611] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.123 [2024-06-10 11:19:00.877617] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.123 [2024-06-10 11:19:00.877623] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:32.123 [2024-06-10 11:19:00.877781] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.123 [2024-06-10 11:19:00.877862] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.123 [2024-06-10 11:19:00.878027] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.123 [2024-06-10 11:19:00.878027] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:10:32.696 11:19:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:32.696 11:19:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@863 -- # return 0 00:10:32.696 11:19:01 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:32.696 11:19:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:32.696 11:19:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.696 11:19:01 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.696 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:32.696 11:19:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:32.696 11:19:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.696 11:19:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:32.696 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:32.696 "tick_rate": 2400000000, 00:10:32.696 "poll_groups": [ 00:10:32.696 { 00:10:32.696 "name": "nvmf_tgt_poll_group_000", 00:10:32.696 "admin_qpairs": 0, 00:10:32.696 "io_qpairs": 0, 00:10:32.696 "current_admin_qpairs": 0, 00:10:32.696 "current_io_qpairs": 0, 00:10:32.696 "pending_bdev_io": 0, 00:10:32.696 "completed_nvme_io": 0, 00:10:32.696 "transports": [] 00:10:32.697 }, 00:10:32.697 { 00:10:32.697 "name": "nvmf_tgt_poll_group_001", 00:10:32.697 "admin_qpairs": 0, 00:10:32.697 "io_qpairs": 0, 00:10:32.697 "current_admin_qpairs": 0, 00:10:32.697 "current_io_qpairs": 0, 00:10:32.697 "pending_bdev_io": 0, 00:10:32.697 "completed_nvme_io": 0, 00:10:32.697 "transports": [] 
00:10:32.697 }, 00:10:32.697 { 00:10:32.697 "name": "nvmf_tgt_poll_group_002", 00:10:32.697 "admin_qpairs": 0, 00:10:32.697 "io_qpairs": 0, 00:10:32.697 "current_admin_qpairs": 0, 00:10:32.697 "current_io_qpairs": 0, 00:10:32.697 "pending_bdev_io": 0, 00:10:32.697 "completed_nvme_io": 0, 00:10:32.697 "transports": [] 00:10:32.697 }, 00:10:32.697 { 00:10:32.697 "name": "nvmf_tgt_poll_group_003", 00:10:32.697 "admin_qpairs": 0, 00:10:32.697 "io_qpairs": 0, 00:10:32.697 "current_admin_qpairs": 0, 00:10:32.697 "current_io_qpairs": 0, 00:10:32.697 "pending_bdev_io": 0, 00:10:32.697 "completed_nvme_io": 0, 00:10:32.697 "transports": [] 00:10:32.697 } 00:10:32.697 ] 00:10:32.697 }' 00:10:32.697 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:32.697 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:32.697 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:32.697 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:32.697 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:32.697 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:32.958 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:32.958 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:32.958 11:19:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:32.958 11:19:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.958 [2024-06-10 11:19:01.720085] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x833100/0x8375f0) succeed. 00:10:32.958 [2024-06-10 11:19:01.734817] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x834740/0x878c80) succeed. 
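The jcount/jsum checks that rpc.sh runs against this nvmf_get_stats JSON are thin jq wrappers; a plausible reconstruction, assuming the JSON arrives on stdin rather than via the $stats variable the trace shows:

  # Count the values a jq filter emits (jq prints one JSON value per line).
  jcount() { jq "$1" | wc -l; }
  # Sum the numeric values a jq filter emits.
  jsum()   { jq "$1" | awk '{s+=$1} END {print s}'; }

  # Against the stats captured above:
  #   echo "$stats" | jcount '.poll_groups[].name'      # 4 poll groups, one per core in -m 0xF
  #   echo "$stats" | jsum '.poll_groups[].io_qpairs'   # 0 -- nothing connected yet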
00:10:32.958 11:19:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:32.958 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:32.958 11:19:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:32.958 11:19:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.958 11:19:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:32.958 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:32.958 "tick_rate": 2400000000, 00:10:32.958 "poll_groups": [ 00:10:32.958 { 00:10:32.958 "name": "nvmf_tgt_poll_group_000", 00:10:32.958 "admin_qpairs": 0, 00:10:32.958 "io_qpairs": 0, 00:10:32.958 "current_admin_qpairs": 0, 00:10:32.959 "current_io_qpairs": 0, 00:10:32.959 "pending_bdev_io": 0, 00:10:32.959 "completed_nvme_io": 0, 00:10:32.959 "transports": [ 00:10:32.959 { 00:10:32.959 "trtype": "RDMA", 00:10:32.959 "pending_data_buffer": 0, 00:10:32.959 "devices": [ 00:10:32.959 { 00:10:32.959 "name": "mlx5_0", 00:10:32.959 "polls": 15930, 00:10:32.959 "idle_polls": 15930, 00:10:32.959 "completions": 0, 00:10:32.959 "requests": 0, 00:10:32.959 "request_latency": 0, 00:10:32.959 "pending_free_request": 0, 00:10:32.959 "pending_rdma_read": 0, 00:10:32.959 "pending_rdma_write": 0, 00:10:32.959 "pending_rdma_send": 0, 00:10:32.959 "total_send_wrs": 0, 00:10:32.959 "send_doorbell_updates": 0, 00:10:32.959 "total_recv_wrs": 4096, 00:10:32.959 "recv_doorbell_updates": 1 00:10:32.959 }, 00:10:32.959 { 00:10:32.959 "name": "mlx5_1", 00:10:32.959 "polls": 15930, 00:10:32.959 "idle_polls": 15930, 00:10:32.959 "completions": 0, 00:10:32.959 "requests": 0, 00:10:32.959 "request_latency": 0, 00:10:32.959 "pending_free_request": 0, 00:10:32.959 "pending_rdma_read": 0, 00:10:32.959 "pending_rdma_write": 0, 00:10:32.959 "pending_rdma_send": 0, 00:10:32.959 "total_send_wrs": 0, 00:10:32.959 "send_doorbell_updates": 0, 00:10:32.959 "total_recv_wrs": 4096, 00:10:32.959 "recv_doorbell_updates": 1 00:10:32.959 } 00:10:32.959 ] 00:10:32.959 } 00:10:32.959 ] 00:10:32.959 }, 00:10:32.959 { 00:10:32.959 "name": "nvmf_tgt_poll_group_001", 00:10:32.959 "admin_qpairs": 0, 00:10:32.959 "io_qpairs": 0, 00:10:32.959 "current_admin_qpairs": 0, 00:10:32.959 "current_io_qpairs": 0, 00:10:32.959 "pending_bdev_io": 0, 00:10:32.959 "completed_nvme_io": 0, 00:10:32.959 "transports": [ 00:10:32.959 { 00:10:32.959 "trtype": "RDMA", 00:10:32.959 "pending_data_buffer": 0, 00:10:32.959 "devices": [ 00:10:32.959 { 00:10:32.959 "name": "mlx5_0", 00:10:32.959 "polls": 16067, 00:10:32.959 "idle_polls": 16067, 00:10:32.959 "completions": 0, 00:10:32.959 "requests": 0, 00:10:32.959 "request_latency": 0, 00:10:32.959 "pending_free_request": 0, 00:10:32.959 "pending_rdma_read": 0, 00:10:32.959 "pending_rdma_write": 0, 00:10:32.959 "pending_rdma_send": 0, 00:10:32.959 "total_send_wrs": 0, 00:10:32.959 "send_doorbell_updates": 0, 00:10:32.959 "total_recv_wrs": 4096, 00:10:32.959 "recv_doorbell_updates": 1 00:10:32.959 }, 00:10:32.959 { 00:10:32.959 "name": "mlx5_1", 00:10:32.959 "polls": 16067, 00:10:32.959 "idle_polls": 16067, 00:10:32.959 "completions": 0, 00:10:32.959 "requests": 0, 00:10:32.959 "request_latency": 0, 00:10:32.959 "pending_free_request": 0, 00:10:32.959 "pending_rdma_read": 0, 00:10:32.959 "pending_rdma_write": 0, 00:10:32.959 "pending_rdma_send": 0, 00:10:32.959 "total_send_wrs": 0, 00:10:32.959 "send_doorbell_updates": 0, 00:10:32.959 "total_recv_wrs": 4096, 00:10:32.959 "recv_doorbell_updates": 
1 00:10:32.959 } 00:10:32.959 ] 00:10:32.959 } 00:10:32.959 ] 00:10:32.959 }, 00:10:32.959 { 00:10:32.959 "name": "nvmf_tgt_poll_group_002", 00:10:32.959 "admin_qpairs": 0, 00:10:32.959 "io_qpairs": 0, 00:10:32.959 "current_admin_qpairs": 0, 00:10:32.959 "current_io_qpairs": 0, 00:10:32.959 "pending_bdev_io": 0, 00:10:32.959 "completed_nvme_io": 0, 00:10:32.959 "transports": [ 00:10:32.959 { 00:10:32.959 "trtype": "RDMA", 00:10:32.959 "pending_data_buffer": 0, 00:10:32.959 "devices": [ 00:10:32.959 { 00:10:32.959 "name": "mlx5_0", 00:10:32.959 "polls": 5587, 00:10:32.959 "idle_polls": 5587, 00:10:32.959 "completions": 0, 00:10:32.959 "requests": 0, 00:10:32.959 "request_latency": 0, 00:10:32.959 "pending_free_request": 0, 00:10:32.959 "pending_rdma_read": 0, 00:10:32.959 "pending_rdma_write": 0, 00:10:32.959 "pending_rdma_send": 0, 00:10:32.959 "total_send_wrs": 0, 00:10:32.959 "send_doorbell_updates": 0, 00:10:32.959 "total_recv_wrs": 4096, 00:10:32.959 "recv_doorbell_updates": 1 00:10:32.959 }, 00:10:32.959 { 00:10:32.959 "name": "mlx5_1", 00:10:32.959 "polls": 5587, 00:10:32.959 "idle_polls": 5587, 00:10:32.959 "completions": 0, 00:10:32.959 "requests": 0, 00:10:32.959 "request_latency": 0, 00:10:32.959 "pending_free_request": 0, 00:10:32.959 "pending_rdma_read": 0, 00:10:32.959 "pending_rdma_write": 0, 00:10:32.959 "pending_rdma_send": 0, 00:10:32.959 "total_send_wrs": 0, 00:10:32.959 "send_doorbell_updates": 0, 00:10:32.959 "total_recv_wrs": 4096, 00:10:32.959 "recv_doorbell_updates": 1 00:10:32.959 } 00:10:32.959 ] 00:10:32.959 } 00:10:32.959 ] 00:10:32.959 }, 00:10:32.959 { 00:10:32.959 "name": "nvmf_tgt_poll_group_003", 00:10:32.959 "admin_qpairs": 0, 00:10:32.959 "io_qpairs": 0, 00:10:32.959 "current_admin_qpairs": 0, 00:10:32.959 "current_io_qpairs": 0, 00:10:32.959 "pending_bdev_io": 0, 00:10:32.959 "completed_nvme_io": 0, 00:10:32.959 "transports": [ 00:10:32.959 { 00:10:32.959 "trtype": "RDMA", 00:10:32.959 "pending_data_buffer": 0, 00:10:32.959 "devices": [ 00:10:32.959 { 00:10:32.959 "name": "mlx5_0", 00:10:32.959 "polls": 845, 00:10:32.959 "idle_polls": 845, 00:10:32.959 "completions": 0, 00:10:32.959 "requests": 0, 00:10:32.959 "request_latency": 0, 00:10:32.959 "pending_free_request": 0, 00:10:32.959 "pending_rdma_read": 0, 00:10:32.959 "pending_rdma_write": 0, 00:10:32.959 "pending_rdma_send": 0, 00:10:32.959 "total_send_wrs": 0, 00:10:32.959 "send_doorbell_updates": 0, 00:10:32.959 "total_recv_wrs": 4096, 00:10:32.959 "recv_doorbell_updates": 1 00:10:32.959 }, 00:10:32.959 { 00:10:32.959 "name": "mlx5_1", 00:10:32.959 "polls": 845, 00:10:32.959 "idle_polls": 845, 00:10:32.959 "completions": 0, 00:10:32.959 "requests": 0, 00:10:32.959 "request_latency": 0, 00:10:32.959 "pending_free_request": 0, 00:10:32.959 "pending_rdma_read": 0, 00:10:32.959 "pending_rdma_write": 0, 00:10:32.959 "pending_rdma_send": 0, 00:10:32.959 "total_send_wrs": 0, 00:10:32.959 "send_doorbell_updates": 0, 00:10:32.959 "total_recv_wrs": 4096, 00:10:32.959 "recv_doorbell_updates": 1 00:10:32.959 } 00:10:32.959 ] 00:10:32.959 } 00:10:32.959 ] 00:10:32.959 } 00:10:32.959 ] 00:10:32.959 }' 00:10:32.959 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:32.959 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:32.959 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:32.959 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:33.220 
11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:33.220 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:33.220 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:33.220 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:33.220 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:33.220 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:33.220 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:10:33.220 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:10:33.220 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:10:33.220 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:10:33.220 11:19:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:33.220 11:19:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:10:33.220 11:19:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:10:33.220 11:19:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:10:33.220 11:19:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:10:33.220 11:19:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:10:33.220 11:19:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:10:33.220 11:19:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:10:33.220 11:19:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:33.220 11:19:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:10:33.220 11:19:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:33.220 11:19:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.221 Malloc1 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.221 
11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.221 [2024-06-10 11:19:02.183531] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 192.168.100.8 -s 4420 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 192.168.100.8 -s 4420 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:33.221 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:10:33.482 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:33.482 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:10:33.482 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:33.482 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:10:33.482 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:10:33.482 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 192.168.100.8 -s 4420 00:10:33.482 [2024-06-10 11:19:02.239097] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6' 00:10:33.482 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:33.482 could not add new controller: failed to write to nvme-fabrics device 00:10:33.482 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:10:33.482 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:33.482 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:10:33.482 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:33.482 11:19:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:33.482 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.482 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.482 11:19:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.482 11:19:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:34.869 11:19:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:34.869 11:19:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:34.869 11:19:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:34.869 11:19:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:34.869 11:19:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:36.782 11:19:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:36.782 11:19:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:36.782 11:19:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:36.782 11:19:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:36.782 11:19:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:36.782 11:19:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:36.782 11:19:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:38.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.174 11:19:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:38.174 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:38.174 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:38.174 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:38.174 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:38.174 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:38.174 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:38.175 11:19:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:38.175 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:38.175 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:38.175 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:38.175 11:19:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:38.175 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:10:38.175 11:19:07 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@651 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:38.175 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:10:38.175 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:38.175 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:10:38.175 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:38.175 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:10:38.175 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:38.175 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:10:38.175 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:10:38.175 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:38.436 [2024-06-10 11:19:07.183101] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6' 00:10:38.436 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:38.436 could not add new controller: failed to write to nvme-fabrics device 00:10:38.436 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:10:38.436 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:38.436 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:10:38.436 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:38.436 11:19:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:38.436 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:38.436 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:38.436 11:19:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:38.436 11:19:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:39.826 11:19:08 nvmf_rdma.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:39.826 11:19:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:39.826 11:19:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:39.826 11:19:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:39.826 11:19:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:41.742 11:19:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:41.742 11:19:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:41.742 11:19:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:41.742 11:19:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:41.742 11:19:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:41.742 11:19:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:41.742 11:19:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:43.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.126 11:19:12 nvmf_rdma.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:43.126 11:19:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:43.126 11:19:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:43.126 11:19:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:43.126 11:19:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:43.126 11:19:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:43.126 11:19:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:43.126 11:19:12 nvmf_rdma.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:43.126 11:19:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:43.126 11:19:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.126 11:19:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:43.126 11:19:12 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:43.388 11:19:12 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:43.388 11:19:12 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:43.388 11:19:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:43.388 11:19:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.388 11:19:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:43.388 11:19:12 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:43.388 11:19:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:43.388 11:19:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.388 [2024-06-10 11:19:12.116691] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:43.388 11:19:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:43.388 11:19:12 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:43.388 11:19:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:43.388 11:19:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.388 11:19:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:43.388 11:19:12 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:43.388 11:19:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:43.388 11:19:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.388 11:19:12 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:43.388 11:19:12 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:44.775 11:19:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:44.775 11:19:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:44.775 11:19:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:44.775 11:19:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:44.775 11:19:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:46.687 11:19:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:46.687 11:19:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:46.687 11:19:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:46.687 11:19:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:46.687 11:19:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:46.687 11:19:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:46.687 11:19:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:48.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
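Every iteration of the seq 1 5 loop above is the same create, connect, verify, tear-down pass. Condensed into plain commands, with scripts/rpc.py standing in for the test's rpc_cmd wrapper (NQN, serial, and listener address taken from the trace; the waitforserial retry loop and error handling are omitted):

  NQN=nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t rdma -a 192.168.100.8 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
  ./scripts/rpc.py nvmf_subsystem_allow_any_host "$NQN"

  nvme connect -i 15 -t rdma -n "$NQN" -a 192.168.100.8 -s 4420
  sleep 2
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 1 namespace visible
  nvme disconnect -n "$NQN"

  ./scripts/rpc.py nvmf_subsystem_remove_ns "$NQN" 5
  ./scripts/rpc.py nvmf_delete_subsystem "$NQN"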
00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.071 [2024-06-10 11:19:16.887267] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:48.071 11:19:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:49.456 11:19:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:49.456 11:19:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:49.456 11:19:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:49.456 11:19:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:49.456 11:19:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:51.397 11:19:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:51.397 11:19:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:51.397 11:19:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:51.397 11:19:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:51.397 11:19:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:51.397 11:19:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:51.397 11:19:20 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:52.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o 
NAME,SERIAL 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.782 [2024-06-10 11:19:21.657668] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.782 11:19:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:54.166 11:19:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:54.166 11:19:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:54.166 11:19:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:54.166 11:19:23 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:54.166 11:19:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:56.713 11:19:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:56.713 11:19:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:56.713 11:19:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:56.713 11:19:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:56.713 11:19:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:56.713 11:19:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:56.713 11:19:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:57.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.656 [2024-06-10 11:19:26.474509] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.656 
11:19:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.656 11:19:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:59.042 11:19:27 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:59.042 11:19:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:59.042 11:19:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:59.042 11:19:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:59.042 11:19:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:11:01.588 11:19:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:11:01.588 11:19:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:01.588 11:19:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:11:01.588 11:19:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:11:01.588 11:19:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:11:01.588 11:19:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:11:01.588 11:19:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:02.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.530 [2024-06-10 11:19:31.233382] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:02.530 11:19:31 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:03.959 11:19:32 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:03.959 11:19:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:11:03.959 11:19:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:11:03.959 11:19:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:11:03.959 11:19:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:11:05.872 11:19:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:11:05.872 11:19:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:05.872 11:19:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:11:05.872 11:19:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:11:05.872 11:19:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:11:05.872 11:19:34 
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:11:05.872 11:19:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:07.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.260 11:19:35 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:07.260 11:19:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:11:07.260 11:19:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:11:07.260 11:19:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.260 11:19:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:11:07.260 11:19:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.260 [2024-06-10 11:19:36.050338] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.260 [2024-06-10 11:19:36.110535] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.260 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.261 [2024-06-10 11:19:36.170724] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.261 [2024-06-10 11:19:36.226935] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.261 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.523 [2024-06-10 11:19:36.287106] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.523 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:07.523 "tick_rate": 2400000000, 00:11:07.523 "poll_groups": [ 00:11:07.523 { 00:11:07.523 "name": "nvmf_tgt_poll_group_000", 00:11:07.523 "admin_qpairs": 2, 00:11:07.523 "io_qpairs": 27, 00:11:07.523 "current_admin_qpairs": 0, 00:11:07.523 "current_io_qpairs": 0, 00:11:07.523 "pending_bdev_io": 0, 00:11:07.523 "completed_nvme_io": 176, 00:11:07.523 "transports": [ 00:11:07.523 { 00:11:07.523 "trtype": "RDMA", 00:11:07.523 "pending_data_buffer": 0, 00:11:07.523 "devices": [ 00:11:07.523 { 00:11:07.523 "name": "mlx5_0", 00:11:07.523 "polls": 4844301, 00:11:07.523 "idle_polls": 4843903, 00:11:07.523 "completions": 461, 00:11:07.523 "requests": 230, 00:11:07.523 "request_latency": 44841888, 00:11:07.523 "pending_free_request": 0, 00:11:07.523 "pending_rdma_read": 0, 00:11:07.523 "pending_rdma_write": 0, 00:11:07.523 "pending_rdma_send": 0, 00:11:07.523 "total_send_wrs": 405, 00:11:07.523 "send_doorbell_updates": 194, 00:11:07.523 "total_recv_wrs": 4326, 00:11:07.523 "recv_doorbell_updates": 194 00:11:07.523 }, 00:11:07.523 { 00:11:07.523 "name": "mlx5_1", 00:11:07.523 "polls": 4844301, 00:11:07.523 "idle_polls": 4844301, 00:11:07.523 "completions": 0, 00:11:07.523 "requests": 0, 00:11:07.523 "request_latency": 0, 00:11:07.523 "pending_free_request": 0, 00:11:07.523 "pending_rdma_read": 0, 00:11:07.523 "pending_rdma_write": 0, 00:11:07.523 "pending_rdma_send": 0, 00:11:07.523 "total_send_wrs": 0, 00:11:07.523 "send_doorbell_updates": 0, 00:11:07.523 "total_recv_wrs": 4096, 00:11:07.523 "recv_doorbell_updates": 1 00:11:07.523 } 00:11:07.523 ] 00:11:07.523 } 00:11:07.523 ] 00:11:07.523 }, 00:11:07.523 { 00:11:07.523 "name": "nvmf_tgt_poll_group_001", 00:11:07.523 "admin_qpairs": 2, 00:11:07.523 "io_qpairs": 26, 00:11:07.523 "current_admin_qpairs": 0, 00:11:07.523 "current_io_qpairs": 0, 00:11:07.523 "pending_bdev_io": 0, 00:11:07.523 "completed_nvme_io": 124, 00:11:07.523 "transports": [ 00:11:07.523 { 00:11:07.523 "trtype": "RDMA", 00:11:07.523 
"pending_data_buffer": 0, 00:11:07.523 "devices": [ 00:11:07.523 { 00:11:07.523 "name": "mlx5_0", 00:11:07.523 "polls": 5343501, 00:11:07.523 "idle_polls": 5343186, 00:11:07.523 "completions": 354, 00:11:07.523 "requests": 177, 00:11:07.523 "request_latency": 28662562, 00:11:07.523 "pending_free_request": 0, 00:11:07.523 "pending_rdma_read": 0, 00:11:07.523 "pending_rdma_write": 0, 00:11:07.523 "pending_rdma_send": 0, 00:11:07.523 "total_send_wrs": 300, 00:11:07.523 "send_doorbell_updates": 153, 00:11:07.523 "total_recv_wrs": 4273, 00:11:07.524 "recv_doorbell_updates": 154 00:11:07.524 }, 00:11:07.524 { 00:11:07.524 "name": "mlx5_1", 00:11:07.524 "polls": 5343501, 00:11:07.524 "idle_polls": 5343501, 00:11:07.524 "completions": 0, 00:11:07.524 "requests": 0, 00:11:07.524 "request_latency": 0, 00:11:07.524 "pending_free_request": 0, 00:11:07.524 "pending_rdma_read": 0, 00:11:07.524 "pending_rdma_write": 0, 00:11:07.524 "pending_rdma_send": 0, 00:11:07.524 "total_send_wrs": 0, 00:11:07.524 "send_doorbell_updates": 0, 00:11:07.524 "total_recv_wrs": 4096, 00:11:07.524 "recv_doorbell_updates": 1 00:11:07.524 } 00:11:07.524 ] 00:11:07.524 } 00:11:07.524 ] 00:11:07.524 }, 00:11:07.524 { 00:11:07.524 "name": "nvmf_tgt_poll_group_002", 00:11:07.524 "admin_qpairs": 1, 00:11:07.524 "io_qpairs": 26, 00:11:07.524 "current_admin_qpairs": 0, 00:11:07.524 "current_io_qpairs": 0, 00:11:07.524 "pending_bdev_io": 0, 00:11:07.524 "completed_nvme_io": 51, 00:11:07.524 "transports": [ 00:11:07.524 { 00:11:07.524 "trtype": "RDMA", 00:11:07.524 "pending_data_buffer": 0, 00:11:07.524 "devices": [ 00:11:07.524 { 00:11:07.524 "name": "mlx5_0", 00:11:07.524 "polls": 4872235, 00:11:07.524 "idle_polls": 4872097, 00:11:07.524 "completions": 159, 00:11:07.524 "requests": 79, 00:11:07.524 "request_latency": 14288628, 00:11:07.524 "pending_free_request": 0, 00:11:07.524 "pending_rdma_read": 0, 00:11:07.524 "pending_rdma_write": 0, 00:11:07.524 "pending_rdma_send": 0, 00:11:07.524 "total_send_wrs": 118, 00:11:07.524 "send_doorbell_updates": 68, 00:11:07.524 "total_recv_wrs": 4175, 00:11:07.524 "recv_doorbell_updates": 68 00:11:07.524 }, 00:11:07.524 { 00:11:07.524 "name": "mlx5_1", 00:11:07.524 "polls": 4872235, 00:11:07.524 "idle_polls": 4872235, 00:11:07.524 "completions": 0, 00:11:07.524 "requests": 0, 00:11:07.524 "request_latency": 0, 00:11:07.524 "pending_free_request": 0, 00:11:07.524 "pending_rdma_read": 0, 00:11:07.524 "pending_rdma_write": 0, 00:11:07.524 "pending_rdma_send": 0, 00:11:07.524 "total_send_wrs": 0, 00:11:07.524 "send_doorbell_updates": 0, 00:11:07.524 "total_recv_wrs": 4096, 00:11:07.524 "recv_doorbell_updates": 1 00:11:07.524 } 00:11:07.524 ] 00:11:07.524 } 00:11:07.524 ] 00:11:07.524 }, 00:11:07.524 { 00:11:07.524 "name": "nvmf_tgt_poll_group_003", 00:11:07.524 "admin_qpairs": 2, 00:11:07.524 "io_qpairs": 26, 00:11:07.524 "current_admin_qpairs": 0, 00:11:07.524 "current_io_qpairs": 0, 00:11:07.524 "pending_bdev_io": 0, 00:11:07.524 "completed_nvme_io": 104, 00:11:07.524 "transports": [ 00:11:07.524 { 00:11:07.524 "trtype": "RDMA", 00:11:07.524 "pending_data_buffer": 0, 00:11:07.524 "devices": [ 00:11:07.524 { 00:11:07.524 "name": "mlx5_0", 00:11:07.524 "polls": 3456106, 00:11:07.524 "idle_polls": 3455812, 00:11:07.524 "completions": 316, 00:11:07.524 "requests": 158, 00:11:07.524 "request_latency": 26936446, 00:11:07.524 "pending_free_request": 0, 00:11:07.524 "pending_rdma_read": 0, 00:11:07.524 "pending_rdma_write": 0, 00:11:07.524 "pending_rdma_send": 0, 00:11:07.524 "total_send_wrs": 262, 
00:11:07.524 "send_doorbell_updates": 145, 00:11:07.524 "total_recv_wrs": 4254, 00:11:07.524 "recv_doorbell_updates": 146 00:11:07.524 }, 00:11:07.524 { 00:11:07.524 "name": "mlx5_1", 00:11:07.524 "polls": 3456106, 00:11:07.524 "idle_polls": 3456106, 00:11:07.524 "completions": 0, 00:11:07.524 "requests": 0, 00:11:07.524 "request_latency": 0, 00:11:07.524 "pending_free_request": 0, 00:11:07.524 "pending_rdma_read": 0, 00:11:07.524 "pending_rdma_write": 0, 00:11:07.524 "pending_rdma_send": 0, 00:11:07.524 "total_send_wrs": 0, 00:11:07.524 "send_doorbell_updates": 0, 00:11:07.524 "total_recv_wrs": 4096, 00:11:07.524 "recv_doorbell_updates": 1 00:11:07.524 } 00:11:07.524 ] 00:11:07.524 } 00:11:07.524 ] 00:11:07.524 } 00:11:07.524 ] 00:11:07.524 }' 00:11:07.524 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:07.524 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:07.524 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:07.524 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:07.524 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:07.524 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:07.524 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:07.524 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:07.524 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:07.524 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:11:07.524 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:11:07.524 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:11:07.524 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:11:07.524 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:11:07.524 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # (( 1290 > 0 )) 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # (( 114729524 > 0 )) 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:07.786 rmmod nvme_rdma 00:11:07.786 rmmod nvme_fabrics 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3470567 ']' 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3470567 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@949 -- # '[' -z 3470567 ']' 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@953 -- # kill -0 3470567 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@954 -- # uname 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3470567 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3470567' 00:11:07.786 killing process with pid 3470567 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@968 -- # kill 3470567 00:11:07.786 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@973 -- # wait 3470567 00:11:08.047 11:19:36 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:08.047 11:19:36 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:08.047 00:11:08.047 real 0m43.235s 00:11:08.047 user 2m25.650s 00:11:08.047 sys 0m6.778s 00:11:08.047 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:08.047 11:19:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.047 ************************************ 00:11:08.047 END TEST nvmf_rpc 00:11:08.047 ************************************ 00:11:08.047 11:19:36 nvmf_rdma -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:11:08.047 11:19:36 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:08.047 11:19:36 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:08.047 11:19:36 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:08.047 ************************************ 00:11:08.047 START TEST nvmf_invalid 00:11:08.047 ************************************ 00:11:08.047 11:19:36 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:11:08.308 * Looking for test storage... 
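The jsum checks traced at the end of the nvmf_rpc run above sum one numeric field across every poll group in the captured nvmf_get_stats JSON and assert that the total is positive. A minimal sketch of that aggregation, assuming "stats" holds the JSON returned by rpc.py nvmf_get_stats and that rpc.py is reachable on PATH (the jq filters are the ones visible in the trace; the helper name and everything else here is illustrative):

    # Capture the target statistics once, then sum a field across all poll groups.
    stats="$(rpc.py nvmf_get_stats)"

    jsum() {
        local filter=$1
        jq "$filter" <<<"$stats" | awk '{s+=$1} END {print s}'
    }

    # Assertions mirroring the traced checks: each aggregate must be greater than zero.
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))
    (( $(jsum '.poll_groups[].transports[].devices[].completions') > 0 ))
    (( $(jsum '.poll_groups[].transports[].devices[].request_latency') > 0 ))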
00:11:08.308 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.308 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.309 11:19:37 
nvmf_rdma.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:11:08.309 11:19:37 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.956 
11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:11:14.956 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:14.956 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:11:14.956 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:11:14.957 Found net devices under 0000:98:00.0: mlx_0_0 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:11:14.957 Found net devices under 0000:98:00.1: mlx_0_1 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@420 -- # rdma_device_init 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # uname 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:14.957 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:14.957 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:11:14.957 altname enp152s0f0np0 00:11:14.957 altname ens817f0np0 00:11:14.957 inet 192.168.100.8/24 scope global mlx_0_0 00:11:14.957 valid_lft forever preferred_lft forever 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:14.957 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:14.957 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:11:14.957 altname enp152s0f1np1 00:11:14.957 altname ens817f1np1 00:11:14.957 inet 192.168.100.9/24 scope global mlx_0_1 00:11:14.957 valid_lft forever preferred_lft forever 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t 
rxe_net_devs 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:14.957 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:15.219 192.168.100.9' 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:15.219 192.168.100.9' 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # head -n 1 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:15.219 192.168.100.9' 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # head -n 1 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # tail -n +2 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@459 -- # 
'[' -z 192.168.100.8 ']' 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:15.219 11:19:43 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:15.219 11:19:44 nvmf_rdma.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:15.219 11:19:44 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:15.219 11:19:44 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:15.219 11:19:44 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:15.219 11:19:44 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3481505 00:11:15.219 11:19:44 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3481505 00:11:15.219 11:19:44 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:15.219 11:19:44 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@830 -- # '[' -z 3481505 ']' 00:11:15.219 11:19:44 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.219 11:19:44 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:15.219 11:19:44 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.219 11:19:44 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:15.219 11:19:44 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:15.219 [2024-06-10 11:19:44.077176] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:11:15.219 [2024-06-10 11:19:44.077224] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.219 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.219 [2024-06-10 11:19:44.140577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.481 [2024-06-10 11:19:44.211099] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.481 [2024-06-10 11:19:44.211132] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.481 [2024-06-10 11:19:44.211137] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.481 [2024-06-10 11:19:44.211142] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.481 [2024-06-10 11:19:44.211146] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
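The nvmftestinit steps traced above walk the two mlx_0_* net devices, read each port's IPv4 address with ip/awk/cut, and then split the resulting list into the first and second target IPs. A minimal sketch of that discovery, assuming the interface names mlx_0_0 and mlx_0_1 seen in the trace (error handling and the soft-RoCE fallback are omitted):

    # Return the IPv4 address assigned to an interface, as the traced get_ip_address does.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"

    # First/second target IPs come from a head/tail split of the list, mirroring the trace.
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2)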
00:11:15.481 [2024-06-10 11:19:44.211277] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.481 [2024-06-10 11:19:44.211399] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.481 [2024-06-10 11:19:44.211555] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.481 [2024-06-10 11:19:44.211557] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.051 11:19:44 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:16.051 11:19:44 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@863 -- # return 0 00:11:16.051 11:19:44 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:16.051 11:19:44 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:16.051 11:19:44 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:16.051 11:19:44 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.051 11:19:44 nvmf_rdma.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:16.051 11:19:44 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27864 00:11:16.312 [2024-06-10 11:19:45.056783] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:16.312 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:16.312 { 00:11:16.312 "nqn": "nqn.2016-06.io.spdk:cnode27864", 00:11:16.312 "tgt_name": "foobar", 00:11:16.312 "method": "nvmf_create_subsystem", 00:11:16.312 "req_id": 1 00:11:16.312 } 00:11:16.312 Got JSON-RPC error response 00:11:16.312 response: 00:11:16.312 { 00:11:16.312 "code": -32603, 00:11:16.312 "message": "Unable to find target foobar" 00:11:16.312 }' 00:11:16.312 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:16.312 { 00:11:16.312 "nqn": "nqn.2016-06.io.spdk:cnode27864", 00:11:16.312 "tgt_name": "foobar", 00:11:16.312 "method": "nvmf_create_subsystem", 00:11:16.312 "req_id": 1 00:11:16.312 } 00:11:16.312 Got JSON-RPC error response 00:11:16.312 response: 00:11:16.312 { 00:11:16.312 "code": -32603, 00:11:16.312 "message": "Unable to find target foobar" 00:11:16.312 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:16.312 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:16.312 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode16908 00:11:16.312 [2024-06-10 11:19:45.229414] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16908: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:16.312 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:16.312 { 00:11:16.312 "nqn": "nqn.2016-06.io.spdk:cnode16908", 00:11:16.312 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:16.312 "method": "nvmf_create_subsystem", 00:11:16.312 "req_id": 1 00:11:16.312 } 00:11:16.312 Got JSON-RPC error response 00:11:16.312 response: 00:11:16.312 { 00:11:16.312 "code": -32602, 00:11:16.312 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:16.312 }' 00:11:16.312 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@46 -- # 
[[ request: 00:11:16.312 { 00:11:16.312 "nqn": "nqn.2016-06.io.spdk:cnode16908", 00:11:16.312 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:16.312 "method": "nvmf_create_subsystem", 00:11:16.312 "req_id": 1 00:11:16.312 } 00:11:16.312 Got JSON-RPC error response 00:11:16.312 response: 00:11:16.312 { 00:11:16.312 "code": -32602, 00:11:16.312 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:16.312 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:16.312 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:16.312 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode4043 00:11:16.573 [2024-06-10 11:19:45.397917] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4043: invalid model number 'SPDK_Controller' 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:16.573 { 00:11:16.573 "nqn": "nqn.2016-06.io.spdk:cnode4043", 00:11:16.573 "model_number": "SPDK_Controller\u001f", 00:11:16.573 "method": "nvmf_create_subsystem", 00:11:16.573 "req_id": 1 00:11:16.573 } 00:11:16.573 Got JSON-RPC error response 00:11:16.573 response: 00:11:16.573 { 00:11:16.573 "code": -32602, 00:11:16.573 "message": "Invalid MN SPDK_Controller\u001f" 00:11:16.573 }' 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:16.573 { 00:11:16.573 "nqn": "nqn.2016-06.io.spdk:cnode4043", 00:11:16.573 "model_number": "SPDK_Controller\u001f", 00:11:16.573 "method": "nvmf_create_subsystem", 00:11:16.573 "req_id": 1 00:11:16.573 } 00:11:16.573 Got JSON-RPC error response 00:11:16.573 response: 00:11:16.573 { 00:11:16.573 "code": -32602, 00:11:16.573 "message": "Invalid MN SPDK_Controller\u001f" 00:11:16.573 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 47 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:16.573 11:19:45 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.573 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:11:16.574 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:16.574 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:11:16.574 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.574 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.574 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:11:16.574 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:11:16.574 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:11:16.574 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.574 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.574 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x62' 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:16.834 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ u == \- ]] 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo 'u/uQSqi7,JqZp`Y3bfLi' 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'u/uQSqi7,JqZp`Y3bfLi' nqn.2016-06.io.spdk:cnode22286 00:11:16.835 [2024-06-10 11:19:45.730975] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22286: invalid serial number 'u/uQSqi7,JqZp`Y3bfLi' 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:16.835 { 00:11:16.835 "nqn": "nqn.2016-06.io.spdk:cnode22286", 00:11:16.835 "serial_number": "u/uQSqi7\u007f,JqZp`Y3bfLi", 00:11:16.835 "method": "nvmf_create_subsystem", 00:11:16.835 "req_id": 1 00:11:16.835 } 00:11:16.835 Got JSON-RPC error response 00:11:16.835 response: 00:11:16.835 { 00:11:16.835 "code": -32602, 00:11:16.835 "message": "Invalid SN u/uQSqi7\u007f,JqZp`Y3bfLi" 00:11:16.835 }' 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:16.835 { 00:11:16.835 "nqn": "nqn.2016-06.io.spdk:cnode22286", 00:11:16.835 "serial_number": "u/uQSqi7\u007f,JqZp`Y3bfLi", 00:11:16.835 "method": "nvmf_create_subsystem", 00:11:16.835 "req_id": 1 00:11:16.835 } 00:11:16.835 Got JSON-RPC error response 00:11:16.835 response: 00:11:16.835 { 00:11:16.835 "code": -32602, 00:11:16.835 "message": "Invalid SN u/uQSqi7\u007f,JqZp`Y3bfLi" 00:11:16.835 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' 
'53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:11:16.835 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.095 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.095 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:11:17.095 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:17.095 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:11:17.095 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.095 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.095 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:17.095 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x72' 00:11:17.095 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:17.095 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:11:17.096 11:19:45 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 
-- # string+=l 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:11:17.096 11:19:45 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:17.096 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:17.096 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:17.096 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.096 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.096 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:11:17.096 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:17.096 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.097 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.358 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:17.358 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:17.358 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:17.358 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.358 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.358 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ x == \- ]] 00:11:17.358 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo 'xGZl#2r*!sw[><~F3YNvelH75;w9{+H$m(NI4Gvx' 00:11:17.358 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'xGZl#2r*!sw[><~F3YNvelH75;w9{+H$m(NI4Gvx' nqn.2016-06.io.spdk:cnode17226 00:11:17.358 [2024-06-10 11:19:46.212531] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17226: invalid model number 'xGZl#2r*!sw[><~F3YNvelH75;w9{+H$m(NI4Gvx' 00:11:17.358 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:11:17.358 { 00:11:17.358 "nqn": "nqn.2016-06.io.spdk:cnode17226", 00:11:17.358 "model_number": "xGZl#2r*!sw[><~\u007fF3YNvelH75;w9{+H$m(NI4Gvx", 00:11:17.358 "method": "nvmf_create_subsystem", 00:11:17.358 "req_id": 1 00:11:17.358 } 00:11:17.358 Got JSON-RPC error response 00:11:17.358 response: 00:11:17.358 { 00:11:17.358 "code": -32602, 00:11:17.358 "message": "Invalid MN xGZl#2r*!sw[><~\u007fF3YNvelH75;w9{+H$m(NI4Gvx" 00:11:17.358 }' 00:11:17.358 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:11:17.358 { 00:11:17.358 "nqn": "nqn.2016-06.io.spdk:cnode17226", 00:11:17.358 "model_number": "xGZl#2r*!sw[><~\u007fF3YNvelH75;w9{+H$m(NI4Gvx", 00:11:17.358 "method": "nvmf_create_subsystem", 00:11:17.358 "req_id": 1 00:11:17.358 } 00:11:17.358 Got JSON-RPC error response 00:11:17.358 response: 00:11:17.358 { 00:11:17.358 "code": -32602, 00:11:17.358 "message": "Invalid MN xGZl#2r*!sw[><~\u007fF3YNvelH75;w9{+H$m(NI4Gvx" 00:11:17.358 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:17.358 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:11:17.618 [2024-06-10 11:19:46.416209] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6e79e0/0x6ebed0) succeed. 00:11:17.618 [2024-06-10 11:19:46.430829] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6e9020/0x72d560) succeed. 
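The long @24-@31 loop traced above is target/invalid.sh's gen_random_s, which builds the random serial and model numbers (21 and 41 characters here) one byte at a time from printable ASCII. A condensed sketch, assuming RANDOM-based indexing for the character pick and a simple guard against a leading '-' (the trace only shows the @21 chars array, the per-character printf/echo mechanics, and the final `[[ ... == \- ]]` test):

gen_random_s() {
  local length=$1 ll string=
  local chars=($(seq 32 127))   # decimal codes for ' ' .. DEL, matching the @21 array
  for (( ll = 0; ll < length; ll++ )); do
    # @25: printf %x converts the decimal code to hex; echo -e '\xNN' emits the byte
    string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
  done
  # assumed handling behind @28's check: never let the string start with '-'
  [[ ${string:0:1} == - ]] && string=" ${string:1}"
  echo "$string"
}

Feeding that output to nvmf_create_subsystem -s/-d is exactly what the @54 and @58 steps above do, which is why the JSON-RPC error responses quote serial and model strings full of control characters and shell metacharacters.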
00:11:17.618 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:17.879 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:11:17.879 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:11:17.879 192.168.100.9' 00:11:17.879 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:11:17.879 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:11:17.879 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:11:18.140 [2024-06-10 11:19:46.889106] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:18.140 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:11:18.140 { 00:11:18.140 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:18.140 "listen_address": { 00:11:18.140 "trtype": "rdma", 00:11:18.140 "traddr": "192.168.100.8", 00:11:18.140 "trsvcid": "4421" 00:11:18.140 }, 00:11:18.140 "method": "nvmf_subsystem_remove_listener", 00:11:18.140 "req_id": 1 00:11:18.140 } 00:11:18.140 Got JSON-RPC error response 00:11:18.140 response: 00:11:18.140 { 00:11:18.140 "code": -32602, 00:11:18.140 "message": "Invalid parameters" 00:11:18.140 }' 00:11:18.140 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:11:18.140 { 00:11:18.140 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:18.140 "listen_address": { 00:11:18.140 "trtype": "rdma", 00:11:18.140 "traddr": "192.168.100.8", 00:11:18.140 "trsvcid": "4421" 00:11:18.140 }, 00:11:18.140 "method": "nvmf_subsystem_remove_listener", 00:11:18.140 "req_id": 1 00:11:18.140 } 00:11:18.140 Got JSON-RPC error response 00:11:18.140 response: 00:11:18.140 { 00:11:18.140 "code": -32602, 00:11:18.140 "message": "Invalid parameters" 00:11:18.140 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:18.140 11:19:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16952 -i 0 00:11:18.140 [2024-06-10 11:19:47.061673] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16952: invalid cntlid range [0-65519] 00:11:18.140 11:19:47 nvmf_rdma.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:11:18.140 { 00:11:18.140 "nqn": "nqn.2016-06.io.spdk:cnode16952", 00:11:18.140 "min_cntlid": 0, 00:11:18.140 "method": "nvmf_create_subsystem", 00:11:18.140 "req_id": 1 00:11:18.140 } 00:11:18.140 Got JSON-RPC error response 00:11:18.140 response: 00:11:18.140 { 00:11:18.140 "code": -32602, 00:11:18.140 "message": "Invalid cntlid range [0-65519]" 00:11:18.140 }' 00:11:18.140 11:19:47 nvmf_rdma.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:11:18.140 { 00:11:18.140 "nqn": "nqn.2016-06.io.spdk:cnode16952", 00:11:18.140 "min_cntlid": 0, 00:11:18.140 "method": "nvmf_create_subsystem", 00:11:18.140 "req_id": 1 00:11:18.140 } 00:11:18.140 Got JSON-RPC error response 00:11:18.140 response: 00:11:18.140 { 00:11:18.140 "code": -32602, 00:11:18.140 "message": "Invalid cntlid range [0-65519]" 00:11:18.140 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:18.140 11:19:47 nvmf_rdma.nvmf_invalid -- target/invalid.sh@75 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30808 -i 65520 00:11:18.401 [2024-06-10 11:19:47.234291] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30808: invalid cntlid range [65520-65519] 00:11:18.401 11:19:47 nvmf_rdma.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:11:18.401 { 00:11:18.401 "nqn": "nqn.2016-06.io.spdk:cnode30808", 00:11:18.401 "min_cntlid": 65520, 00:11:18.401 "method": "nvmf_create_subsystem", 00:11:18.401 "req_id": 1 00:11:18.401 } 00:11:18.401 Got JSON-RPC error response 00:11:18.401 response: 00:11:18.401 { 00:11:18.401 "code": -32602, 00:11:18.401 "message": "Invalid cntlid range [65520-65519]" 00:11:18.401 }' 00:11:18.401 11:19:47 nvmf_rdma.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:11:18.401 { 00:11:18.401 "nqn": "nqn.2016-06.io.spdk:cnode30808", 00:11:18.401 "min_cntlid": 65520, 00:11:18.401 "method": "nvmf_create_subsystem", 00:11:18.401 "req_id": 1 00:11:18.401 } 00:11:18.401 Got JSON-RPC error response 00:11:18.401 response: 00:11:18.401 { 00:11:18.401 "code": -32602, 00:11:18.401 "message": "Invalid cntlid range [65520-65519]" 00:11:18.401 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:18.401 11:19:47 nvmf_rdma.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7475 -I 0 00:11:18.663 [2024-06-10 11:19:47.398879] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7475: invalid cntlid range [1-0] 00:11:18.663 11:19:47 nvmf_rdma.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:11:18.663 { 00:11:18.663 "nqn": "nqn.2016-06.io.spdk:cnode7475", 00:11:18.663 "max_cntlid": 0, 00:11:18.663 "method": "nvmf_create_subsystem", 00:11:18.663 "req_id": 1 00:11:18.663 } 00:11:18.663 Got JSON-RPC error response 00:11:18.663 response: 00:11:18.663 { 00:11:18.663 "code": -32602, 00:11:18.663 "message": "Invalid cntlid range [1-0]" 00:11:18.663 }' 00:11:18.663 11:19:47 nvmf_rdma.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:11:18.663 { 00:11:18.663 "nqn": "nqn.2016-06.io.spdk:cnode7475", 00:11:18.663 "max_cntlid": 0, 00:11:18.663 "method": "nvmf_create_subsystem", 00:11:18.663 "req_id": 1 00:11:18.663 } 00:11:18.663 Got JSON-RPC error response 00:11:18.663 response: 00:11:18.663 { 00:11:18.663 "code": -32602, 00:11:18.663 "message": "Invalid cntlid range [1-0]" 00:11:18.663 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:18.663 11:19:47 nvmf_rdma.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20946 -I 65520 00:11:18.663 [2024-06-10 11:19:47.571516] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20946: invalid cntlid range [1-65520] 00:11:18.663 11:19:47 nvmf_rdma.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:11:18.663 { 00:11:18.663 "nqn": "nqn.2016-06.io.spdk:cnode20946", 00:11:18.663 "max_cntlid": 65520, 00:11:18.663 "method": "nvmf_create_subsystem", 00:11:18.663 "req_id": 1 00:11:18.663 } 00:11:18.663 Got JSON-RPC error response 00:11:18.663 response: 00:11:18.663 { 00:11:18.663 "code": -32602, 00:11:18.663 "message": "Invalid cntlid range [1-65520]" 00:11:18.663 }' 00:11:18.663 11:19:47 nvmf_rdma.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:11:18.663 { 00:11:18.663 "nqn": "nqn.2016-06.io.spdk:cnode20946", 
00:11:18.663 "max_cntlid": 65520, 00:11:18.663 "method": "nvmf_create_subsystem", 00:11:18.663 "req_id": 1 00:11:18.663 } 00:11:18.663 Got JSON-RPC error response 00:11:18.663 response: 00:11:18.663 { 00:11:18.663 "code": -32602, 00:11:18.663 "message": "Invalid cntlid range [1-65520]" 00:11:18.663 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:18.663 11:19:47 nvmf_rdma.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24018 -i 6 -I 5 00:11:18.925 [2024-06-10 11:19:47.744138] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24018: invalid cntlid range [6-5] 00:11:18.925 11:19:47 nvmf_rdma.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:11:18.925 { 00:11:18.925 "nqn": "nqn.2016-06.io.spdk:cnode24018", 00:11:18.925 "min_cntlid": 6, 00:11:18.925 "max_cntlid": 5, 00:11:18.925 "method": "nvmf_create_subsystem", 00:11:18.925 "req_id": 1 00:11:18.925 } 00:11:18.925 Got JSON-RPC error response 00:11:18.925 response: 00:11:18.925 { 00:11:18.925 "code": -32602, 00:11:18.925 "message": "Invalid cntlid range [6-5]" 00:11:18.925 }' 00:11:18.925 11:19:47 nvmf_rdma.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:11:18.925 { 00:11:18.925 "nqn": "nqn.2016-06.io.spdk:cnode24018", 00:11:18.925 "min_cntlid": 6, 00:11:18.925 "max_cntlid": 5, 00:11:18.925 "method": "nvmf_create_subsystem", 00:11:18.925 "req_id": 1 00:11:18.925 } 00:11:18.925 Got JSON-RPC error response 00:11:18.925 response: 00:11:18.925 { 00:11:18.925 "code": -32602, 00:11:18.925 "message": "Invalid cntlid range [6-5]" 00:11:18.925 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:18.925 11:19:47 nvmf_rdma.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:18.925 11:19:47 nvmf_rdma.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:18.925 { 00:11:18.925 "name": "foobar", 00:11:18.925 "method": "nvmf_delete_target", 00:11:18.925 "req_id": 1 00:11:18.925 } 00:11:18.925 Got JSON-RPC error response 00:11:18.925 response: 00:11:18.925 { 00:11:18.925 "code": -32602, 00:11:18.925 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:18.925 }' 00:11:18.925 11:19:47 nvmf_rdma.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:18.925 { 00:11:18.925 "name": "foobar", 00:11:18.925 "method": "nvmf_delete_target", 00:11:18.925 "req_id": 1 00:11:18.925 } 00:11:18.925 Got JSON-RPC error response 00:11:18.925 response: 00:11:18.925 { 00:11:18.925 "code": -32602, 00:11:18.925 "message": "The specified target doesn't exist, cannot delete it." 
00:11:18.925 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:18.925 11:19:47 nvmf_rdma.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:18.925 11:19:47 nvmf_rdma.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:18.925 11:19:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:18.925 11:19:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:11:18.925 11:19:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:18.925 11:19:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:18.925 11:19:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:11:18.925 11:19:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:18.925 11:19:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:18.925 rmmod nvme_rdma 00:11:19.187 rmmod nvme_fabrics 00:11:19.187 11:19:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:19.187 11:19:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:11:19.187 11:19:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:11:19.187 11:19:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3481505 ']' 00:11:19.187 11:19:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3481505 00:11:19.187 11:19:47 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@949 -- # '[' -z 3481505 ']' 00:11:19.187 11:19:47 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@953 -- # kill -0 3481505 00:11:19.187 11:19:47 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@954 -- # uname 00:11:19.187 11:19:47 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:19.187 11:19:47 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3481505 00:11:19.187 11:19:47 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:19.187 11:19:47 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:19.187 11:19:47 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3481505' 00:11:19.187 killing process with pid 3481505 00:11:19.187 11:19:47 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@968 -- # kill 3481505 00:11:19.187 11:19:47 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@973 -- # wait 3481505 00:11:19.448 11:19:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:19.448 11:19:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:19.448 00:11:19.448 real 0m11.256s 00:11:19.448 user 0m20.009s 00:11:19.448 sys 0m6.137s 00:11:19.448 11:19:48 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:19.448 11:19:48 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:19.448 ************************************ 00:11:19.448 END TEST nvmf_invalid 00:11:19.448 ************************************ 00:11:19.448 11:19:48 nvmf_rdma -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:11:19.448 11:19:48 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:19.448 11:19:48 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:19.448 11:19:48 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:19.448 
************************************ 00:11:19.448 START TEST nvmf_abort 00:11:19.448 ************************************ 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:11:19.448 * Looking for test storage... 00:11:19.448 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:19.448 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:19.708 11:19:48 nvmf_rdma.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:19.708 11:19:48 nvmf_rdma.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:19.708 11:19:48 nvmf_rdma.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:11:19.708 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:19.708 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.708 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:19.708 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:19.708 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:19.708 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.708 11:19:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:11:19.708 11:19:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.708 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:19.708 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:19.708 11:19:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:11:19.708 11:19:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- 
nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:11:26.496 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:11:26.496 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:11:26.496 Found net devices under 0000:98:00.0: mlx_0_0 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:26.496 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:11:26.497 Found net devices under 0000:98:00.1: mlx_0_1 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@420 -- # rdma_device_init 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # uname 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # 
get_ip_address mlx_0_0 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:26.497 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:26.497 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:11:26.497 altname enp152s0f0np0 00:11:26.497 altname ens817f0np0 00:11:26.497 inet 192.168.100.8/24 scope global mlx_0_0 00:11:26.497 valid_lft forever preferred_lft forever 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:26.497 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:26.497 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:11:26.497 altname enp152s0f1np1 00:11:26.497 altname ens817f1np1 00:11:26.497 inet 192.168.100.9/24 scope global mlx_0_1 00:11:26.497 valid_lft forever preferred_lft forever 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo 
mlx_0_0 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:26.497 192.168.100.9' 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:26.497 192.168.100.9' 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # head -n 1 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:26.497 192.168.100.9' 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # tail -n +2 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # head -n 1 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:26.497 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:26.758 11:19:55 nvmf_rdma.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:26.758 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:26.758 11:19:55 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:26.758 11:19:55 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set 
+x 00:11:26.758 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3486360 00:11:26.758 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3486360 00:11:26.758 11:19:55 nvmf_rdma.nvmf_abort -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:26.759 11:19:55 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@830 -- # '[' -z 3486360 ']' 00:11:26.759 11:19:55 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.759 11:19:55 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:26.759 11:19:55 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.759 11:19:55 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:26.759 11:19:55 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:26.759 [2024-06-10 11:19:55.546701] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:11:26.759 [2024-06-10 11:19:55.546749] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.759 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.759 [2024-06-10 11:19:55.624748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:26.759 [2024-06-10 11:19:55.700989] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.759 [2024-06-10 11:19:55.701045] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.759 [2024-06-10 11:19:55.701053] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.759 [2024-06-10 11:19:55.701059] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.759 [2024-06-10 11:19:55.701065] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
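The address discovery traced above boils down to two small pieces: get_ip_address pulls the first IPv4 address off an interface, and the first/second target IPs are then peeled off the resulting list with head and tail. A stand-alone sketch of both follows, hard-wiring the interface names found on this host (mlx_0_0 / mlx_0_1); the helper body is inferred from the ip/awk/cut pipeline in the trace, not copied from common.sh:

  # First IPv4 address on an interface, without the /prefix length.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  # Newline-separated list of RDMA interface addresses, as gathered above.
  RDMA_IP_LIST=$(printf '%s\n%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9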
00:11:26.759 [2024-06-10 11:19:55.701192] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.759 [2024-06-10 11:19:55.701355] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.759 [2024-06-10 11:19:55.701356] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@863 -- # return 0 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:27.698 [2024-06-10 11:19:56.397162] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7b3840/0x7b7d30) succeed. 00:11:27.698 [2024-06-10 11:19:56.410991] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7b4de0/0x7f93c0) succeed. 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:27.698 Malloc0 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:27.698 Delay0 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:27.698 [2024-06-10 11:19:56.576536] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:27.698 11:19:56 nvmf_rdma.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:27.698 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.959 [2024-06-10 11:19:56.676544] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:29.869 Initializing NVMe Controllers 00:11:29.869 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:11:29.869 controller IO queue size 128 less than required 00:11:29.869 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:29.869 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:29.870 Initialization complete. Launching workers. 00:11:29.870 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37979 00:11:29.870 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38040, failed to submit 62 00:11:29.870 success 37980, unsuccess 60, failed 0 00:11:29.870 11:19:58 nvmf_rdma.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:29.870 11:19:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:29.870 11:19:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:29.870 11:19:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:29.870 11:19:58 nvmf_rdma.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:29.870 11:19:58 nvmf_rdma.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:11:29.870 11:19:58 nvmf_rdma.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:29.870 11:19:58 nvmf_rdma.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:11:29.870 11:19:58 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:29.870 11:19:58 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:29.870 11:19:58 nvmf_rdma.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:11:29.870 11:19:58 nvmf_rdma.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:29.870 11:19:58 nvmf_rdma.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:29.870 rmmod nvme_rdma 00:11:29.870 rmmod nvme_fabrics 00:11:30.131 11:19:58 nvmf_rdma.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:30.131 11:19:58 nvmf_rdma.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:11:30.131 11:19:58 nvmf_rdma.nvmf_abort -- nvmf/common.sh@125 -- # return 0 
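Stripped of the xtrace noise, the abort test above amounts to a short RPC sequence against the already-running nvmf_tgt plus one run of the abort example. A hedged reconstruction using only commands visible in the trace (SPDK_ROOT is a stand-in for the workspace spdk checkout, and the target started by nvmfappstart is assumed to be up on its default RPC socket):

  SPDK_ROOT=/path/to/spdk                 # stand-in for the jenkins workspace checkout
  RPC="$SPDK_ROOT/scripts/rpc.py"

  # Transport and backing devices: a 64 MiB / 4096-byte-block malloc bdev behind a
  # delay bdev whose large latencies keep I/O outstanding long enough to abort.
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
  $RPC bdev_malloc_create 64 4096 -b Malloc0
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

  # Subsystem cnode0 exposing Delay0 on the first RDMA address, port 4420,
  # plus a discovery listener on the same address.
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

  # One-second abort workload on core 0, queue depth 128, as run above.
  "$SPDK_ROOT/build/examples/abort" \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128

  # Teardown mirrors the trace: drop the subsystem, then nvmftestfini unloads nvme-rdma.
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0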
00:11:30.131 11:19:58 nvmf_rdma.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3486360 ']' 00:11:30.132 11:19:58 nvmf_rdma.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3486360 00:11:30.132 11:19:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@949 -- # '[' -z 3486360 ']' 00:11:30.132 11:19:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@953 -- # kill -0 3486360 00:11:30.132 11:19:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@954 -- # uname 00:11:30.132 11:19:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:30.132 11:19:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3486360 00:11:30.132 11:19:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:11:30.132 11:19:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:11:30.132 11:19:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3486360' 00:11:30.132 killing process with pid 3486360 00:11:30.132 11:19:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@968 -- # kill 3486360 00:11:30.132 11:19:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@973 -- # wait 3486360 00:11:30.393 11:19:59 nvmf_rdma.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:30.393 11:19:59 nvmf_rdma.nvmf_abort -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:30.393 00:11:30.393 real 0m10.818s 00:11:30.393 user 0m14.451s 00:11:30.393 sys 0m5.644s 00:11:30.393 11:19:59 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:30.393 11:19:59 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:30.393 ************************************ 00:11:30.394 END TEST nvmf_abort 00:11:30.394 ************************************ 00:11:30.394 11:19:59 nvmf_rdma -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:11:30.394 11:19:59 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:30.394 11:19:59 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:30.394 11:19:59 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:30.394 ************************************ 00:11:30.394 START TEST nvmf_ns_hotplug_stress 00:11:30.394 ************************************ 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:11:30.394 * Looking for test storage... 
00:11:30.394 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 
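As in the abort test above, the host identity comes straight from nvme-cli: NVME_HOSTNQN is the output of nvme gen-hostnqn and NVME_HOSTID is its trailing uuid. How common.sh strips that uuid out is not shown in this trace; the parameter expansion below is only an illustration that reproduces the values seen here:

  NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
  NVME_HOSTID=${NVME_HOSTNQN##*:}     # uuid after the last ':' -> 008c5ac1-5feb-ec11-9bc7-a4bf019282a6
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")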
00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:30.394 11:19:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:11:38.531 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:11:38.531 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:98:00.0: mlx_0_0' 00:11:38.531 Found net devices under 0000:98:00.0: mlx_0_0 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:11:38.531 Found net devices under 0000:98:00.1: mlx_0_1 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # uname 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:38.531 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:38.532 11:20:06 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:38.532 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:38.532 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:11:38.532 altname enp152s0f0np0 00:11:38.532 altname ens817f0np0 00:11:38.532 inet 192.168.100.8/24 scope global mlx_0_0 00:11:38.532 valid_lft forever preferred_lft forever 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:38.532 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 
00:11:38.532 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:11:38.532 altname enp152s0f1np1 00:11:38.532 altname ens817f1np1 00:11:38.532 inet 192.168.100.9/24 scope global mlx_0_1 00:11:38.532 valid_lft forever preferred_lft forever 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:38.532 192.168.100.9' 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:38.532 192.168.100.9' 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # head -n 1 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:38.532 192.168.100.9' 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # tail -n +2 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # head -n 1 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3490748 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3490748 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # '[' -z 3490748 ']' 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
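waitforlisten, whose banner is printed just above, is not expanded in this trace; functionally it has to poll the freshly started target until its RPC socket at /var/tmp/spdk.sock answers. A hedged stand-in follows (not the harness implementation; rpc_get_methods is used only as a cheap liveness probe, and SPDK_ROOT is a placeholder for the spdk checkout):

  SPDK_ROOT=${SPDK_ROOT:-/path/to/spdk}
  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1        # target process died early
          # Any successful RPC proves the socket is up and the app is listening.
          "$SPDK_ROOT/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && return 0
          sleep 0.5
      done
      return 1                                          # gave up after max_retries polls
  }
  # e.g.: waitforlisten_sketch 3490748   (the nvmfpid captured above)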
00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # xtrace_disable
00:11:38.532 11:20:06 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:11:38.532 [2024-06-10 11:20:06.488354] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization...
00:11:38.532 [2024-06-10 11:20:06.488421] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:38.533 EAL: No free 2048 kB hugepages reported on node 1
00:11:38.533 [2024-06-10 11:20:06.570433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:11:38.533 [2024-06-10 11:20:06.666092] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:38.533 [2024-06-10 11:20:06.666150] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:38.533 [2024-06-10 11:20:06.666158] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:38.533 [2024-06-10 11:20:06.666170] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:38.533 [2024-06-10 11:20:06.666176] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:38.533 [2024-06-10 11:20:06.666308] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:11:38.533 [2024-06-10 11:20:06.666473] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:11:38.533 [2024-06-10 11:20:06.666474] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:11:38.533 11:20:07 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:11:38.533 11:20:07 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@863 -- # return 0
00:11:38.533 11:20:07 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:11:38.533 11:20:07 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@729 -- # xtrace_disable
00:11:38.533 11:20:07 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:11:38.533 11:20:07 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:38.533 11:20:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:11:38.533 11:20:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:11:38.533 [2024-06-10 11:20:07.482810] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12e2840/0x12e6d30) succeed.
00:11:38.533 [2024-06-10 11:20:07.496842] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12e3de0/0x13283c0) succeed.
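Condensed, the target bring-up traced above is two steps: start nvmf_tgt and create the RDMA transport over the RPC socket. A sketch with the same flags, assuming it runs from an SPDK checkout (the trace uses absolute Jenkins workspace paths):

  # -i 0: shm id, -e 0xFFFF: enable all tracepoint groups, -m 0xE: cores 1-3.
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # waitforlisten in the trace polls /var/tmp/spdk.sock until RPCs succeed.
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192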
00:11:38.792 11:20:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:11:39.052 11:20:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:11:39.052 [2024-06-10 11:20:07.926150] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:11:39.052 11:20:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:11:39.311 11:20:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:11:39.311 Malloc0
00:11:39.572 11:20:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:11:39.572 Delay0
00:11:39.572 11:20:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:39.832 11:20:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:11:39.832 NULL1
00:11:39.832 11:20:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:11:40.092 11:20:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3491304
00:11:40.092 11:20:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304
00:11:40.092 11:20:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:11:40.092 11:20:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:40.092 EAL: No free 2048 kB hugepages reported on node 1
00:11:40.352 Read completed with error (sct=0, sc=11)
00:11:40.352 11:20:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:40.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:40.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:40.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:40.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:40.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:40.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:40.352 Message suppressed 999 times: Read completed with error
(sct=0, sc=11) 00:11:40.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:40.352 11:20:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:11:40.352 11:20:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:40.612 [2024-06-10 11:20:09.417816] bdev.c:5000:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:11:40.612 true 00:11:40.612 11:20:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:11:40.612 11:20:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:41.551 11:20:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:41.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:41.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:41.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:41.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:41.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:41.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:41.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:41.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:41.551 11:20:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:11:41.551 11:20:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:41.811 true 00:11:41.811 11:20:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:11:41.811 11:20:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:42.751 11:20:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:42.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:42.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:42.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:42.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:42.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:42.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:42.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:42.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:42.751 11:20:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:11:42.751 11:20:11 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:43.010 true 00:11:43.010 11:20:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:11:43.010 11:20:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:43.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:43.951 11:20:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:43.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:43.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:43.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:43.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:43.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:43.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:43.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:43.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:43.951 11:20:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:11:43.951 11:20:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:44.211 true 00:11:44.211 11:20:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:11:44.211 11:20:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:45.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:45.151 11:20:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:45.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:45.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:45.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:45.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:45.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:45.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:45.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:45.151 11:20:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:11:45.151 11:20:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:45.411 true 00:11:45.411 11:20:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:11:45.411 11:20:14 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.351 11:20:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:46.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.351 11:20:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:11:46.351 11:20:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:46.351 true 00:11:46.611 11:20:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:11:46.611 11:20:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.550 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:47.550 11:20:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:47.550 11:20:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:11:47.550 11:20:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:47.550 true 00:11:47.550 11:20:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:11:47.550 11:20:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:48.489 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:48.489 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:48.489 11:20:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:48.489 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:48.489 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:48.489 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:48.489 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:48.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:48.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
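Every iteration above follows the same pattern from the ns_hotplug_stress.sh@44..@50 trace lines: while the spdk_nvme_perf load started earlier (-t 30 -q 128 -w randread -o 512 -Q 1000) is still alive, namespace 1 is hot-removed, Delay0 is re-attached, and NULL1 grows by one block. A sketch reconstructed from the xtrace (loop syntax inferred, not quoted from the script):

  # Hot-remove/re-add namespace 1 and grow NULL1 while the perf process runs.
  null_size=1000
  while kill -0 "$PERF_PID"; do   # exits, with a shell error, once perf is gone
      scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      scripts/rpc.py bdev_null_resize NULL1 "$null_size"
  done

The recurring 'Read completed with error (sct=0, sc=11)' lines are the expected initiator-side fallout: sc 11 (0x0b) is the generic Invalid Namespace or Format status, which matches reads racing a namespace detach.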
00:11:48.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:48.749 11:20:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:11:48.749 11:20:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:11:48.749 true 00:11:49.010 11:20:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:11:49.010 11:20:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:49.844 11:20:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:49.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:49.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:49.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:49.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:49.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:49.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:49.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:49.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:49.844 11:20:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:11:49.844 11:20:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:11:50.105 true 00:11:50.105 11:20:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:11:50.105 11:20:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.048 11:20:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:51.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.049 11:20:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:11:51.049 11:20:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1010 00:11:51.310 true 00:11:51.310 11:20:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:11:51.310 11:20:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:52.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:52.256 11:20:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:52.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:52.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:52.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:52.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:52.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:52.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:52.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:52.257 11:20:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:11:52.257 11:20:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:11:52.518 true 00:11:52.518 11:20:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:11:52.518 11:20:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.461 11:20:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:53.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.461 11:20:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:11:53.461 11:20:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:53.461 true 00:11:53.461 11:20:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:11:53.461 11:20:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
1 00:11:54.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:54.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:54.847 11:20:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:54.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:54.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:54.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:54.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:54.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:54.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:54.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:54.847 11:20:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:11:54.847 11:20:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:11:54.847 true 00:11:54.847 11:20:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:11:54.847 11:20:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:55.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.790 11:20:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:55.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:56.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:56.051 11:20:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:11:56.051 11:20:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:11:56.051 true 00:11:56.051 11:20:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:11:56.051 11:20:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.054 11:20:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:11:57.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.054 11:20:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:11:57.054 11:20:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:11:57.315 true 00:11:57.315 11:20:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:11:57.315 11:20:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:58.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:58.255 11:20:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:58.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:58.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:58.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:58.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:58.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:58.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:58.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:58.255 11:20:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:11:58.255 11:20:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:58.515 true 00:11:58.515 11:20:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:11:58.515 11:20:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:59.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:59.454 11:20:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:59.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:59.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:59.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:59.454 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:11:59.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:59.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:59.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:59.454 11:20:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:11:59.454 11:20:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:11:59.715 true 00:11:59.715 11:20:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:11:59.715 11:20:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.655 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.655 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.655 11:20:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:00.655 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.655 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.655 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.655 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.655 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.915 11:20:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:00.915 11:20:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:00.915 true 00:12:00.915 11:20:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:12:00.915 11:20:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:01.855 11:20:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:01.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:01.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:01.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:01.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:01.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:01.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:01.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.115 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.115 11:20:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1019 00:12:02.115 11:20:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:02.115 true 00:12:02.115 11:20:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:12:02.115 11:20:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.056 11:20:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:03.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.317 11:20:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:03.317 11:20:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:03.317 true 00:12:03.317 11:20:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:12:03.317 11:20:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.258 11:20:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:04.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.258 11:20:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:04.258 11:20:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:04.518 true 00:12:04.518 11:20:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:12:04.518 11:20:33 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.458 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.458 11:20:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:05.458 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.458 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.458 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.458 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.458 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.458 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.458 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.458 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.458 11:20:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:12:05.458 11:20:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:05.719 true 00:12:05.719 11:20:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:12:05.719 11:20:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.660 11:20:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:06.660 11:20:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:12:06.660 11:20:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:06.920 true 00:12:06.920 11:20:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:12:06.920 11:20:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:07.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.860 11:20:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:07.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.860 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:12:07.860 11:20:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:07.860 11:20:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:08.120 true 00:12:08.120 11:20:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:12:08.120 11:20:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:09.060 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.060 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.060 11:20:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:09.060 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.060 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.060 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.060 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.060 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.060 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.060 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.060 11:20:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:12:09.060 11:20:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:09.321 true 00:12:09.321 11:20:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:12:09.321 11:20:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:10.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.260 11:20:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:10.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.260 11:20:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:12:10.260 11:20:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:10.520 true 00:12:10.520 
11:20:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:12:10.520 11:20:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.459 11:20:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:11.459 11:20:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:12:11.459 11:20:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:12:11.459 true 00:12:11.719 11:20:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:12:11.719 11:20:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.719 11:20:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:12.006 11:20:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:12:12.006 11:20:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:12:12.006 true 00:12:12.006 11:20:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:12:12.006 11:20:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:12.266 11:20:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:12.526 11:20:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:12:12.526 11:20:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:12:12.526 true 00:12:12.526 11:20:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:12:12.526 11:20:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:12.785 11:20:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:13.045 11:20:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:12:13.045 11:20:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:12:13.045 true 00:12:13.045 11:20:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304 00:12:13.045 11:20:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:13.304 11:20:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:13.304 Initializing NVMe Controllers
00:12:13.304 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:12:13.304 Controller IO queue size 128, less than required.
00:12:13.304 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:13.304 Controller IO queue size 128, less than required.
00:12:13.304 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:13.304 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:12:13.304 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:12:13.304 Initialization complete. Launching workers.
00:12:13.304 ========================================================
00:12:13.304 Latency(us)
00:12:13.304 Device Information : IOPS MiB/s Average min max
00:12:13.304 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7318.06 3.57 15830.00 1266.12 1186042.16
00:12:13.304 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 40030.75 19.55 3197.31 1424.71 394075.97
00:12:13.304 ========================================================
00:12:13.304 Total : 47348.80 23.12 5149.77 1266.12 1186042.16
00:12:13.304
00:12:13.304 11:20:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:12:13.304 11:20:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
true
00:12:13.564 11:20:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3491304
00:12:13.564 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3491304) - No such process
00:12:13.564 11:20:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3491304
00:12:13.564 11:20:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:13.825 11:20:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:13.825 11:20:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:12:13.825 11:20:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:12:13.825 11:20:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:12:13.825 11:20:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:13.825 11:20:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:12:14.085 null0
00:12:14.085 11:20:42 nvmf_rdma.nvmf_ns_hotplug_stress
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:14.085 11:20:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:12:14.345 null1 00:12:14.345 11:20:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:14.345 11:20:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:14.345 11:20:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:12:14.345 null2 00:12:14.345 11:20:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:14.345 11:20:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:14.345 11:20:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:12:14.604 null3 00:12:14.604 11:20:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:14.604 11:20:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:14.604 11:20:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:12:14.604 null4 00:12:14.864 11:20:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:14.864 11:20:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:14.864 11:20:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:12:14.864 null5 00:12:14.864 11:20:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:14.864 11:20:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:14.864 11:20:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:12:15.125 null6 00:12:15.125 11:20:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:15.125 11:20:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:15.125 11:20:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:12:15.125 null7 00:12:15.125 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:15.125 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:15.125 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:12:15.125 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:15.125 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
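The eight-way phase above creates one null bdev per worker and then forks the add_remove workers whose interleaved xtrace fills the next lines. A sketch reconstructed from the sh@58..@64 trace lines (loop syntax inferred, not quoted from the script):

  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      # 100 MB null bdevs with a 4096-byte block size, named null0..null7.
      scripts/rpc.py bdev_null_create "null$i" 100 4096
  done
  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &   # one worker per namespace ID 1..8
      pids+=($!)
  done
  wait "${pids[@]}"   # the trace's sh@66 wait on the eight worker PIDs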
00:12:15.125 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:15.125 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:15.125 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:12:15.125 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:12:15.125 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:12:15.125 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:15.125 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:15.125 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:12:15.125 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:15.125 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.125 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.125 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:15.125 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:15.125 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:15.125 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:15.125 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:15.125 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:15.125 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:15.125 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
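The heavily interleaved @16/@17/@18 entries are the eight add_remove workers running concurrently, each repeatedly hot-adding its bdev to nqn.2016-06.io.spdk:cnode1 as a namespace and then hot-removing it. A minimal sketch of the helper, reconstructed from the trace (the 10-iteration bound, the nsid/bdev locals, and the rpc.py argument order are copied from the @14-@18 entries; RPC_PY is the same shorthand as above):

    # add_remove <nsid> <bdev>: hot-add, then hot-remove, the namespace 10 times.
    add_remove() {
        local nsid=$1 bdev=$2
        for (( i = 0; i < 10; i++ )); do
            $RPC_PY nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $RPC_PY nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }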
00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3498240 3498241 3498244 3498245 3498248 3498249 3498251 3498253 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:15.386 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:15.647 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.909 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:16.170 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:16.170 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:16.170 11:20:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:16.170 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.170 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.170 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:16.170 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:16.170 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:16.170 
11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:16.170 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.170 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.170 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.170 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:16.431 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.431 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.431 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:16.431 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:16.431 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.431 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.431 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:16.431 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.431 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.431 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:16.431 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.431 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.431 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:16.431 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.431 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.432 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:16.432 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.432 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.432 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:16.432 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.432 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.432 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:16.432 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:16.432 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:16.432 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:16.432 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:16.432 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.692 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:16.692 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:16.692 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:16.692 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.692 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.692 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:16.692 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.692 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.692 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:16.692 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.692 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.692 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:16.692 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.692 
11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.692 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:16.692 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.692 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.692 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:16.692 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.693 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.693 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:16.693 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.693 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.693 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:16.693 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.693 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.693 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:16.693 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:16.953 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.953 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:16.953 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:16.953 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:16.953 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:16.953 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:16.953 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.953 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.953 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:16.953 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:16.953 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.953 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.953 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:16.953 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.953 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.953 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:16.953 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.953 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.953 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:16.953 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.953 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.953 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:16.953 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.954 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.954 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:17.214 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.214 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.214 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:17.214 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.214 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.214 11:20:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 
null6 00:12:17.214 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:17.214 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.214 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:17.214 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:17.214 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:17.214 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:17.214 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:17.214 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:17.475 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.736 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:17.996 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.996 11:20:46 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:17.996 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:17.996 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:17.996 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:17.996 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:17.996 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.996 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.996 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:17.996 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:17.996 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.996 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.996 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:17.996 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.996 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.996 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:17.996 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.996 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.996 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:17.996 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.996 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.996 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:17.996 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.996 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:12:17.996 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:18.256 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.256 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.256 11:20:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:18.256 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:18.256 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.256 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.256 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:18.256 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:18.256 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:18.256 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:18.256 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:18.256 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:18.256 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:18.256 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.256 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.256 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:18.256 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:18.516 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.516 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.516 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:18.516 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.516 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.516 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:18.516 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.516 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.516 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:18.516 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.516 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.516 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:18.516 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.516 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.516 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.516 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.516 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:18.516 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:18.516 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.516 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.516 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:18.517 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:18.517 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.517 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.517 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:18.517 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:18.777 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:18.777 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:18.777 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:18.777 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.777 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.777 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.777 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.777 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.777 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.777 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.777 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.777 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.777 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.777 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.777 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.777 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:18.777 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:12:18.777 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:18.777 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:12:18.777 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:18.777 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:18.777 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:12:18.777 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:18.777 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:18.777 rmmod nvme_rdma 00:12:19.038 rmmod nvme_fabrics 00:12:19.038 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:19.038 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:12:19.038 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:12:19.038 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3490748 ']' 00:12:19.038 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3490748 00:12:19.038 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@949 -- # '[' -z 3490748 ']' 00:12:19.038 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # kill -0 3490748 00:12:19.038 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@954 -- # uname 00:12:19.038 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:19.038 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3490748 00:12:19.038 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:12:19.038 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:12:19.038 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3490748' 00:12:19.038 killing process with pid 3490748 00:12:19.038 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # kill 3490748 00:12:19.038 11:20:47 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # wait 3490748 00:12:19.038 11:20:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:19.038 11:20:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:19.038 00:12:19.038 real 0m48.822s 00:12:19.038 user 3m16.950s 00:12:19.038 sys 0m12.196s 00:12:19.038 11:20:48 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:19.038 11:20:48 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.038 ************************************ 00:12:19.038 END TEST nvmf_ns_hotplug_stress 00:12:19.038 ************************************ 00:12:19.300 11:20:48 nvmf_rdma -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:12:19.300 11:20:48 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:19.300 11:20:48 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:19.300 11:20:48 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:19.300 ************************************ 00:12:19.300 START TEST nvmf_connect_stress 00:12:19.300 ************************************ 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:12:19.300 * Looking for test storage... 
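The teardown traced just before the END TEST banner (the @68 trap reset and @70 nvmftestfini) follows the pattern visible in the nvmf/common.sh entries: sync, retry unloading the kernel initiator modules, then kill the long-running target process. A rough outline based only on those log entries, not on the script itself; everything other than the module names, retry count, and the literal pid 3490748 (including the nvmfpid variable name and the kill/wait handling) is an assumption made here for illustration:

    # Outline of the cleanup visible in the trace (not verbatim from nvmf/common.sh).
    nvmftestfini() {
        sync
        for (( try = 0; try < 20; try++ )); do           # the log shows up to 20 attempts
            modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        done
        if [ -n "$nvmfpid" ]; then                        # pid was 3490748 in this run
            kill "$nvmfpid" && wait "$nvmfpid" || true    # simplified stand-in for killprocess
        fi
    }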
00:12:19.300 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:12:19.300 11:20:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:27.437 11:20:55 
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:12:27.437 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:12:27.437 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:12:27.437 Found net devices under 0000:98:00.0: mlx_0_0 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:12:27.437 Found net devices under 0000:98:00.1: mlx_0_1 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # uname 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # 
continue 2 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:27.437 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:27.438 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:27.438 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:12:27.438 altname enp152s0f0np0 00:12:27.438 altname ens817f0np0 00:12:27.438 inet 192.168.100.8/24 scope global mlx_0_0 00:12:27.438 valid_lft forever preferred_lft forever 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:27.438 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:27.438 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:12:27.438 altname enp152s0f1np1 00:12:27.438 altname ens817f1np1 00:12:27.438 inet 192.168.100.9/24 scope global mlx_0_1 00:12:27.438 valid_lft forever preferred_lft forever 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- 
nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:27.438 192.168.100.9' 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:27.438 192.168.100.9' 
00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # head -n 1 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:27.438 192.168.100.9' 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # tail -n +2 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # head -n 1 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3503083 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3503083 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@830 -- # '[' -z 3503083 ']' 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:27.438 11:20:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.438 [2024-06-10 11:20:55.336307] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:12:27.438 [2024-06-10 11:20:55.336372] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.438 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.438 [2024-06-10 11:20:55.416562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:27.438 [2024-06-10 11:20:55.509858] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
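At this point in the trace the harness has finished nvmftestinit: the two mlx5 ports are mapped to mlx_0_0/mlx_0_1 with 192.168.100.8 and 192.168.100.9, nvme-rdma is loaded, and nvmfappstart has launched nvmf_tgt as pid 3503083 with core mask 0xE. A rough hand-written approximation of those steps is sketched below; the interface name, addresses and flags are copied from this log, while the relative nvmf_tgt path is illustrative rather than the exact invocation used by nvmf/common.sh:

# Approximate manual equivalent of the nvmftestinit/nvmfappstart steps traced above (a sketch, not the script).
modprobe ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # expect 192.168.100.8
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &                  # same -i/-e/-m flags as the traced launch
NVMF_PID=$!                                                   # 3503083 in this run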
00:12:27.438 [2024-06-10 11:20:55.509914] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.438 [2024-06-10 11:20:55.509922] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.438 [2024-06-10 11:20:55.509929] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.438 [2024-06-10 11:20:55.509935] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:27.438 [2024-06-10 11:20:55.510068] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.438 [2024-06-10 11:20:55.510233] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.438 [2024-06-10 11:20:55.510233] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.438 11:20:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:27.438 11:20:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@863 -- # return 0 00:12:27.438 11:20:56 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:27.438 11:20:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:27.438 11:20:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.438 11:20:56 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.438 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:27.438 11:20:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:27.438 11:20:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.438 [2024-06-10 11:20:56.201327] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x229f840/0x22a3d30) succeed. 00:12:27.438 [2024-06-10 11:20:56.215317] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22a0de0/0x22e53c0) succeed. 
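The two create_ib_device notices above show the newly started target picking up mlx5_0 and mlx5_1. To confirm those devices outside of the test, something along the following lines would do; these are standard rdma-core/iproute2 commands and are not part of connect_stress.sh:

# Optional sanity checks, not taken from the test scripts.
ls /sys/class/infiniband          # expect mlx5_0 and mlx5_1
rdma link show                    # maps mlx5_0/mlx5_1 to their netdevs (mlx_0_0/mlx_0_1 here)
ibv_devinfo -d mlx5_0 | head      # basic device attributes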
00:12:27.438 11:20:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.439 [2024-06-10 11:20:56.332369] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.439 NULL1 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3503139 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.439 EAL: No free 2048 kB hugepages reported on node 1 
00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.439 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.699 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.699 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.699 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.699 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.699 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.699 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.699 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.699 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.699 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.699 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.699 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.699 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.699 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.699 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.699 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.699 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.699 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.699 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.699 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:27.699 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.699 11:20:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:27.699 11:20:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.958 11:20:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:27.958 11:20:56 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:27.958 11:20:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.958 11:20:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:27.958 11:20:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.218 11:20:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:28.218 11:20:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:28.218 11:20:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.218 11:20:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:28.218 11:20:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.478 11:20:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:28.478 11:20:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:28.478 11:20:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.478 11:20:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:28.478 11:20:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.047 11:20:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:29.047 11:20:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:29.047 11:20:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.047 11:20:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:29.047 11:20:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.306 11:20:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:29.306 11:20:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:29.307 11:20:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.307 11:20:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:29.307 11:20:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.566 11:20:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:29.566 11:20:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:29.566 11:20:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.567 11:20:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:29.567 11:20:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.826 11:20:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:29.826 11:20:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:29.826 11:20:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.826 11:20:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:29.826 11:20:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.395 11:20:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:30.395 11:20:59 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:30.395 11:20:59 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.395 11:20:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:30.395 11:20:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.654 11:20:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:30.654 11:20:59 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:30.654 11:20:59 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.654 11:20:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:30.654 11:20:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.915 11:20:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:30.915 11:20:59 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:30.915 11:20:59 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.915 11:20:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:30.915 11:20:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.175 11:21:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.175 11:21:00 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:31.175 11:21:00 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.175 11:21:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.175 11:21:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.435 11:21:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.435 11:21:00 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:31.435 11:21:00 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.436 11:21:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.436 11:21:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.006 11:21:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.006 11:21:00 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:32.006 11:21:00 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.006 11:21:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.006 11:21:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.266 11:21:01 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.266 11:21:01 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:32.266 11:21:01 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.266 11:21:01 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.266 11:21:01 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.526 11:21:01 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.526 11:21:01 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:32.526 11:21:01 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.526 11:21:01 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.526 11:21:01 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.786 11:21:01 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.786 11:21:01 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:32.786 11:21:01 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.786 11:21:01 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.786 11:21:01 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:33.356 11:21:02 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:33.356 11:21:02 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:33.356 11:21:02 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:33.356 11:21:02 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:33.356 11:21:02 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:33.616 11:21:02 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:33.616 11:21:02 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:33.616 11:21:02 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:33.616 11:21:02 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:33.616 11:21:02 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:33.876 11:21:02 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:33.876 11:21:02 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:33.876 11:21:02 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:33.876 11:21:02 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:33.876 11:21:02 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.136 11:21:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:34.136 11:21:03 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:34.136 11:21:03 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:34.136 11:21:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:34.136 11:21:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.444 11:21:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:34.444 11:21:03 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:34.444 11:21:03 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:34.444 11:21:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:34.444 11:21:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.746 11:21:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:34.746 11:21:03 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:34.746 11:21:03 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:34.746 11:21:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:34.746 11:21:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.318 11:21:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:35.318 11:21:03 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:35.318 11:21:03 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:35.318 11:21:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:35.318 11:21:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.578 11:21:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:35.578 11:21:04 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:35.578 11:21:04 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:35.578 11:21:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:35.578 11:21:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.838 11:21:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:35.838 11:21:04 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:35.839 11:21:04 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:35.839 11:21:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:35.839 11:21:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.099 11:21:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:36.099 11:21:04 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:36.099 11:21:04 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:36.099 11:21:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:36.099 11:21:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.360 11:21:05 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:36.360 11:21:05 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:36.360 11:21:05 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:36.360 11:21:05 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:36.360 11:21:05 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.931 11:21:05 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:36.931 11:21:05 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:36.931 11:21:05 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:36.931 11:21:05 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:36.931 11:21:05 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.192 11:21:05 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:37.192 11:21:05 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:37.192 11:21:05 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:37.192 11:21:05 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:37.192 11:21:05 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.453 11:21:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:37.453 11:21:06 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:37.453 11:21:06 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:37.453 11:21:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:37.453 11:21:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.727 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:12:37.727 11:21:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:37.727 11:21:06 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3503139 00:12:37.727 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3503139) - No such process 00:12:37.727 11:21:06 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3503139 00:12:37.727 11:21:06 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:37.727 11:21:06 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:37.727 11:21:06 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:37.727 11:21:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:37.727 11:21:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:12:37.727 11:21:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:37.727 11:21:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:37.727 11:21:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:12:37.727 11:21:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:37.727 11:21:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:37.727 rmmod nvme_rdma 00:12:37.727 rmmod nvme_fabrics 00:12:37.727 11:21:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:37.727 11:21:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:12:37.727 11:21:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:12:37.727 11:21:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3503083 ']' 00:12:37.727 11:21:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3503083 00:12:37.727 11:21:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@949 -- # '[' -z 3503083 ']' 00:12:37.727 11:21:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@953 -- # kill -0 3503083 00:12:37.727 11:21:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@954 -- # uname 00:12:37.727 11:21:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:37.727 11:21:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3503083 
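By this point connect_stress (pid 3503139) has already exited, so the kill -0 probe reports "No such process", the rpc.txt scratch file is removed, and nvmftestfini unloads nvme-rdma/nvme-fabrics before stopping the target. The teardown amounts to roughly the following, where pid 3503083 is the nvmf_tgt started earlier in this test and the ps check mirrors what killprocess does in common/autotest_common.sh:

# Rough outline of the nvmftestfini/killprocess teardown traced here (a sketch, not the script).
modprobe -v -r nvme-rdma
modprobe -v -r nvme-fabrics
if [ -n "$(ps --no-headers -o comm= 3503083)" ]; then    # target still running?
    kill 3503083
    wait 3503083 2>/dev/null || true                      # wait only applies if nvmf_tgt is a child of this shell
fi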
00:12:37.992 11:21:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:12:37.992 11:21:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:12:37.992 11:21:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3503083' 00:12:37.992 killing process with pid 3503083 00:12:37.992 11:21:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@968 -- # kill 3503083 00:12:37.992 11:21:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@973 -- # wait 3503083 00:12:37.992 11:21:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:37.992 11:21:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:37.992 00:12:37.992 real 0m18.851s 00:12:37.992 user 0m41.598s 00:12:37.992 sys 0m6.909s 00:12:37.992 11:21:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:37.992 11:21:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.992 ************************************ 00:12:37.992 END TEST nvmf_connect_stress 00:12:37.992 ************************************ 00:12:37.992 11:21:06 nvmf_rdma -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:12:37.992 11:21:06 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:37.992 11:21:06 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:37.992 11:21:06 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:38.254 ************************************ 00:12:38.254 START TEST nvmf_fused_ordering 00:12:38.254 ************************************ 00:12:38.254 11:21:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:12:38.254 * Looking for test storage... 
00:12:38.254 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:12:38.254 11:21:07 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:44.841 11:21:13 
nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:12:44.841 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:12:44.841 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:12:44.841 Found net devices under 0000:98:00.0: mlx_0_0 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:12:44.841 Found net devices under 0000:98:00.1: mlx_0_1 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@420 -- # rdma_device_init 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # uname 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:44.841 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # 
continue 2 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:45.106 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:45.106 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:12:45.106 altname enp152s0f0np0 00:12:45.106 altname ens817f0np0 00:12:45.106 inet 192.168.100.8/24 scope global mlx_0_0 00:12:45.106 valid_lft forever preferred_lft forever 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:45.106 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:45.106 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:12:45.106 altname enp152s0f1np1 00:12:45.106 altname ens817f1np1 00:12:45.106 inet 192.168.100.9/24 scope global mlx_0_1 00:12:45.106 valid_lft forever preferred_lft forever 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- 
nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:45.106 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:45.107 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:45.107 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:45.107 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:45.107 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:45.107 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:45.107 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:45.107 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:45.107 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:45.107 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:45.107 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:45.107 192.168.100.9' 00:12:45.107 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:45.107 192.168.100.9' 
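The per-interface address lookup that produced 192.168.100.8 and 192.168.100.9 above is a single pipeline over `ip -o -4 addr show`; a minimal sketch of it, using the interface names seen on this rig:

    # -o prints one record per line; field 4 is "addr/prefixlen", cut drops the prefix
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # 192.168.100.8 on this setup
    get_ip_address mlx_0_1    # 192.168.100.9 on this setup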
00:12:45.107 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # head -n 1 00:12:45.107 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:45.107 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:45.107 192.168.100.9' 00:12:45.107 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # tail -n +2 00:12:45.107 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # head -n 1 00:12:45.107 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:45.107 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:45.107 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:45.107 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:45.107 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:45.107 11:21:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:45.107 11:21:14 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:45.107 11:21:14 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:45.107 11:21:14 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:45.107 11:21:14 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:45.107 11:21:14 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3509658 00:12:45.107 11:21:14 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3509658 00:12:45.107 11:21:14 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:45.107 11:21:14 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # '[' -z 3509658 ']' 00:12:45.107 11:21:14 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.107 11:21:14 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:45.107 11:21:14 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.107 11:21:14 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:45.107 11:21:14 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:45.368 [2024-06-10 11:21:14.077860] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:12:45.368 [2024-06-10 11:21:14.077930] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.368 EAL: No free 2048 kB hugepages reported on node 1 00:12:45.368 [2024-06-10 11:21:14.160929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.368 [2024-06-10 11:21:14.253449] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
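RDMA_IP_LIST carries one address per line, and the two target IPs are peeled off it exactly as the head/tail pipeline above shows; restated as a standalone sketch:

    # first line of the list becomes the primary target, second line the secondary one
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'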
00:12:45.368 [2024-06-10 11:21:14.253504] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.368 [2024-06-10 11:21:14.253512] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.368 [2024-06-10 11:21:14.253518] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.368 [2024-06-10 11:21:14.253524] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:45.368 [2024-06-10 11:21:14.253550] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.939 11:21:14 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:45.939 11:21:14 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@863 -- # return 0 00:12:45.939 11:21:14 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:45.939 11:21:14 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:45.939 11:21:14 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:45.939 11:21:14 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.939 11:21:14 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:45.939 11:21:14 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:45.939 11:21:14 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:46.200 [2024-06-10 11:21:14.945626] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6b51d0/0x6b96c0) succeed. 00:12:46.200 [2024-06-10 11:21:14.959348] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6b66d0/0x6fad50) succeed. 
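nvmfappstart boils down to launching nvmf_tgt in the background, remembering its PID, installing the cleanup trap seen above, and waiting for the RPC socket before any rpc_cmd is issued. A rough sketch of that sequence; the socket poll is an assumption standing in for waitforlisten, which this trace does not expand:

    # start the target with the core mask from the test (-m 0x2) and remember its pid
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
    # crude stand-in for waitforlisten: block until the default RPC socket exists
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done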
00:12:46.200 11:21:15 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.200 11:21:15 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:46.200 11:21:15 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.200 11:21:15 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:46.200 11:21:15 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.200 11:21:15 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:46.200 11:21:15 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.200 11:21:15 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:46.200 [2024-06-10 11:21:15.039285] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:46.200 11:21:15 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.200 11:21:15 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:46.200 11:21:15 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.200 11:21:15 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:46.200 NULL1 00:12:46.200 11:21:15 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.200 11:21:15 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:46.200 11:21:15 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.200 11:21:15 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:46.200 11:21:15 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.200 11:21:15 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:46.200 11:21:15 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:46.200 11:21:15 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:46.200 11:21:15 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:46.200 11:21:15 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:46.200 [2024-06-10 11:21:15.102419] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
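The rpc_cmd sequence above builds the whole target in five steps: create the RDMA transport, create subsystem cnode1, listen on 192.168.100.8:4420, create a null bdev (NULL1, 1000 MiB, 512-byte blocks), and attach it as namespace 1. The same setup issued directly through scripts/rpc.py would look roughly like this (arguments taken from the trace; rpc_cmd in the harness wraps the same RPC methods):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # the fused_ordering tool then connects with the same transport string:
    #   -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'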
00:12:46.200 [2024-06-10 11:21:15.102458] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3509750 ] 00:12:46.200 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.461 Attached to nqn.2016-06.io.spdk:cnode1 00:12:46.461 Namespace ID: 1 size: 1GB 00:12:46.461 fused_ordering(0) 00:12:46.461 fused_ordering(1) 00:12:46.461 fused_ordering(2) 00:12:46.461 fused_ordering(3) 00:12:46.461 fused_ordering(4) 00:12:46.461 fused_ordering(5) 00:12:46.461 fused_ordering(6) 00:12:46.461 fused_ordering(7) 00:12:46.461 fused_ordering(8) 00:12:46.461 fused_ordering(9) 00:12:46.461 fused_ordering(10) 00:12:46.461 fused_ordering(11) 00:12:46.461 fused_ordering(12) 00:12:46.461 fused_ordering(13) 00:12:46.461 fused_ordering(14) 00:12:46.461 fused_ordering(15) 00:12:46.461 fused_ordering(16) 00:12:46.461 fused_ordering(17) 00:12:46.461 fused_ordering(18) 00:12:46.461 fused_ordering(19) 00:12:46.461 fused_ordering(20) 00:12:46.461 fused_ordering(21) 00:12:46.461 fused_ordering(22) 00:12:46.461 fused_ordering(23) 00:12:46.461 fused_ordering(24) 00:12:46.461 fused_ordering(25) 00:12:46.461 fused_ordering(26) 00:12:46.461 fused_ordering(27) 00:12:46.461 fused_ordering(28) 00:12:46.461 fused_ordering(29) 00:12:46.461 fused_ordering(30) 00:12:46.461 fused_ordering(31) 00:12:46.461 fused_ordering(32) 00:12:46.461 fused_ordering(33) 00:12:46.461 fused_ordering(34) 00:12:46.461 fused_ordering(35) 00:12:46.461 fused_ordering(36) 00:12:46.461 fused_ordering(37) 00:12:46.461 fused_ordering(38) 00:12:46.461 fused_ordering(39) 00:12:46.461 fused_ordering(40) 00:12:46.461 fused_ordering(41) 00:12:46.461 fused_ordering(42) 00:12:46.461 fused_ordering(43) 00:12:46.461 fused_ordering(44) 00:12:46.461 fused_ordering(45) 00:12:46.461 fused_ordering(46) 00:12:46.461 fused_ordering(47) 00:12:46.461 fused_ordering(48) 00:12:46.461 fused_ordering(49) 00:12:46.461 fused_ordering(50) 00:12:46.461 fused_ordering(51) 00:12:46.461 fused_ordering(52) 00:12:46.461 fused_ordering(53) 00:12:46.461 fused_ordering(54) 00:12:46.461 fused_ordering(55) 00:12:46.461 fused_ordering(56) 00:12:46.461 fused_ordering(57) 00:12:46.461 fused_ordering(58) 00:12:46.461 fused_ordering(59) 00:12:46.461 fused_ordering(60) 00:12:46.461 fused_ordering(61) 00:12:46.461 fused_ordering(62) 00:12:46.461 fused_ordering(63) 00:12:46.461 fused_ordering(64) 00:12:46.461 fused_ordering(65) 00:12:46.461 fused_ordering(66) 00:12:46.461 fused_ordering(67) 00:12:46.461 fused_ordering(68) 00:12:46.461 fused_ordering(69) 00:12:46.461 fused_ordering(70) 00:12:46.461 fused_ordering(71) 00:12:46.461 fused_ordering(72) 00:12:46.461 fused_ordering(73) 00:12:46.461 fused_ordering(74) 00:12:46.461 fused_ordering(75) 00:12:46.461 fused_ordering(76) 00:12:46.461 fused_ordering(77) 00:12:46.461 fused_ordering(78) 00:12:46.461 fused_ordering(79) 00:12:46.461 fused_ordering(80) 00:12:46.461 fused_ordering(81) 00:12:46.461 fused_ordering(82) 00:12:46.461 fused_ordering(83) 00:12:46.461 fused_ordering(84) 00:12:46.461 fused_ordering(85) 00:12:46.461 fused_ordering(86) 00:12:46.461 fused_ordering(87) 00:12:46.461 fused_ordering(88) 00:12:46.461 fused_ordering(89) 00:12:46.461 fused_ordering(90) 00:12:46.461 fused_ordering(91) 00:12:46.461 fused_ordering(92) 00:12:46.461 fused_ordering(93) 00:12:46.461 fused_ordering(94) 00:12:46.461 fused_ordering(95) 00:12:46.461 fused_ordering(96) 00:12:46.461 
fused_ordering(97) 00:12:46.461 fused_ordering(98) 00:12:46.461 fused_ordering(99) 00:12:46.461 fused_ordering(100) 00:12:46.461 fused_ordering(101) 00:12:46.461 fused_ordering(102) 00:12:46.461 fused_ordering(103) 00:12:46.461 fused_ordering(104) 00:12:46.461 fused_ordering(105) 00:12:46.461 fused_ordering(106) 00:12:46.461 fused_ordering(107) 00:12:46.461 fused_ordering(108) 00:12:46.461 fused_ordering(109) 00:12:46.461 fused_ordering(110) 00:12:46.461 fused_ordering(111) 00:12:46.461 fused_ordering(112) 00:12:46.461 fused_ordering(113) 00:12:46.461 fused_ordering(114) 00:12:46.461 fused_ordering(115) 00:12:46.461 fused_ordering(116) 00:12:46.461 fused_ordering(117) 00:12:46.461 fused_ordering(118) 00:12:46.462 fused_ordering(119) 00:12:46.462 fused_ordering(120) 00:12:46.462 fused_ordering(121) 00:12:46.462 fused_ordering(122) 00:12:46.462 fused_ordering(123) 00:12:46.462 fused_ordering(124) 00:12:46.462 fused_ordering(125) 00:12:46.462 fused_ordering(126) 00:12:46.462 fused_ordering(127) 00:12:46.462 fused_ordering(128) 00:12:46.462 fused_ordering(129) 00:12:46.462 fused_ordering(130) 00:12:46.462 fused_ordering(131) 00:12:46.462 fused_ordering(132) 00:12:46.462 fused_ordering(133) 00:12:46.462 fused_ordering(134) 00:12:46.462 fused_ordering(135) 00:12:46.462 fused_ordering(136) 00:12:46.462 fused_ordering(137) 00:12:46.462 fused_ordering(138) 00:12:46.462 fused_ordering(139) 00:12:46.462 fused_ordering(140) 00:12:46.462 fused_ordering(141) 00:12:46.462 fused_ordering(142) 00:12:46.462 fused_ordering(143) 00:12:46.462 fused_ordering(144) 00:12:46.462 fused_ordering(145) 00:12:46.462 fused_ordering(146) 00:12:46.462 fused_ordering(147) 00:12:46.462 fused_ordering(148) 00:12:46.462 fused_ordering(149) 00:12:46.462 fused_ordering(150) 00:12:46.462 fused_ordering(151) 00:12:46.462 fused_ordering(152) 00:12:46.462 fused_ordering(153) 00:12:46.462 fused_ordering(154) 00:12:46.462 fused_ordering(155) 00:12:46.462 fused_ordering(156) 00:12:46.462 fused_ordering(157) 00:12:46.462 fused_ordering(158) 00:12:46.462 fused_ordering(159) 00:12:46.462 fused_ordering(160) 00:12:46.462 fused_ordering(161) 00:12:46.462 fused_ordering(162) 00:12:46.462 fused_ordering(163) 00:12:46.462 fused_ordering(164) 00:12:46.462 fused_ordering(165) 00:12:46.462 fused_ordering(166) 00:12:46.462 fused_ordering(167) 00:12:46.462 fused_ordering(168) 00:12:46.462 fused_ordering(169) 00:12:46.462 fused_ordering(170) 00:12:46.462 fused_ordering(171) 00:12:46.462 fused_ordering(172) 00:12:46.462 fused_ordering(173) 00:12:46.462 fused_ordering(174) 00:12:46.462 fused_ordering(175) 00:12:46.462 fused_ordering(176) 00:12:46.462 fused_ordering(177) 00:12:46.462 fused_ordering(178) 00:12:46.462 fused_ordering(179) 00:12:46.462 fused_ordering(180) 00:12:46.462 fused_ordering(181) 00:12:46.462 fused_ordering(182) 00:12:46.462 fused_ordering(183) 00:12:46.462 fused_ordering(184) 00:12:46.462 fused_ordering(185) 00:12:46.462 fused_ordering(186) 00:12:46.462 fused_ordering(187) 00:12:46.462 fused_ordering(188) 00:12:46.462 fused_ordering(189) 00:12:46.462 fused_ordering(190) 00:12:46.462 fused_ordering(191) 00:12:46.462 fused_ordering(192) 00:12:46.462 fused_ordering(193) 00:12:46.462 fused_ordering(194) 00:12:46.462 fused_ordering(195) 00:12:46.462 fused_ordering(196) 00:12:46.462 fused_ordering(197) 00:12:46.462 fused_ordering(198) 00:12:46.462 fused_ordering(199) 00:12:46.462 fused_ordering(200) 00:12:46.462 fused_ordering(201) 00:12:46.462 fused_ordering(202) 00:12:46.462 fused_ordering(203) 00:12:46.462 fused_ordering(204) 
00:12:46.462 fused_ordering(205) 00:12:46.462 fused_ordering(206) 00:12:46.462 fused_ordering(207) 00:12:46.462 fused_ordering(208) 00:12:46.462 fused_ordering(209) 00:12:46.462 fused_ordering(210) 00:12:46.462 fused_ordering(211) 00:12:46.462 fused_ordering(212) 00:12:46.462 fused_ordering(213) 00:12:46.462 fused_ordering(214) 00:12:46.462 fused_ordering(215) 00:12:46.462 fused_ordering(216) 00:12:46.462 fused_ordering(217) 00:12:46.462 fused_ordering(218) 00:12:46.462 fused_ordering(219) 00:12:46.462 fused_ordering(220) 00:12:46.462 fused_ordering(221) 00:12:46.462 fused_ordering(222) 00:12:46.462 fused_ordering(223) 00:12:46.462 fused_ordering(224) 00:12:46.462 fused_ordering(225) 00:12:46.462 fused_ordering(226) 00:12:46.462 fused_ordering(227) 00:12:46.462 fused_ordering(228) 00:12:46.462 fused_ordering(229) 00:12:46.462 fused_ordering(230) 00:12:46.462 fused_ordering(231) 00:12:46.462 fused_ordering(232) 00:12:46.462 fused_ordering(233) 00:12:46.462 fused_ordering(234) 00:12:46.462 fused_ordering(235) 00:12:46.462 fused_ordering(236) 00:12:46.462 fused_ordering(237) 00:12:46.462 fused_ordering(238) 00:12:46.462 fused_ordering(239) 00:12:46.462 fused_ordering(240) 00:12:46.462 fused_ordering(241) 00:12:46.462 fused_ordering(242) 00:12:46.462 fused_ordering(243) 00:12:46.462 fused_ordering(244) 00:12:46.462 fused_ordering(245) 00:12:46.462 fused_ordering(246) 00:12:46.462 fused_ordering(247) 00:12:46.462 fused_ordering(248) 00:12:46.462 fused_ordering(249) 00:12:46.462 fused_ordering(250) 00:12:46.462 fused_ordering(251) 00:12:46.462 fused_ordering(252) 00:12:46.462 fused_ordering(253) 00:12:46.462 fused_ordering(254) 00:12:46.462 fused_ordering(255) 00:12:46.462 fused_ordering(256) 00:12:46.462 fused_ordering(257) 00:12:46.462 fused_ordering(258) 00:12:46.462 fused_ordering(259) 00:12:46.462 fused_ordering(260) 00:12:46.462 fused_ordering(261) 00:12:46.462 fused_ordering(262) 00:12:46.462 fused_ordering(263) 00:12:46.462 fused_ordering(264) 00:12:46.462 fused_ordering(265) 00:12:46.462 fused_ordering(266) 00:12:46.462 fused_ordering(267) 00:12:46.462 fused_ordering(268) 00:12:46.462 fused_ordering(269) 00:12:46.462 fused_ordering(270) 00:12:46.462 fused_ordering(271) 00:12:46.462 fused_ordering(272) 00:12:46.462 fused_ordering(273) 00:12:46.462 fused_ordering(274) 00:12:46.462 fused_ordering(275) 00:12:46.462 fused_ordering(276) 00:12:46.462 fused_ordering(277) 00:12:46.462 fused_ordering(278) 00:12:46.462 fused_ordering(279) 00:12:46.462 fused_ordering(280) 00:12:46.462 fused_ordering(281) 00:12:46.462 fused_ordering(282) 00:12:46.462 fused_ordering(283) 00:12:46.462 fused_ordering(284) 00:12:46.462 fused_ordering(285) 00:12:46.462 fused_ordering(286) 00:12:46.462 fused_ordering(287) 00:12:46.462 fused_ordering(288) 00:12:46.462 fused_ordering(289) 00:12:46.462 fused_ordering(290) 00:12:46.462 fused_ordering(291) 00:12:46.462 fused_ordering(292) 00:12:46.462 fused_ordering(293) 00:12:46.462 fused_ordering(294) 00:12:46.462 fused_ordering(295) 00:12:46.462 fused_ordering(296) 00:12:46.462 fused_ordering(297) 00:12:46.462 fused_ordering(298) 00:12:46.462 fused_ordering(299) 00:12:46.462 fused_ordering(300) 00:12:46.462 fused_ordering(301) 00:12:46.462 fused_ordering(302) 00:12:46.462 fused_ordering(303) 00:12:46.462 fused_ordering(304) 00:12:46.462 fused_ordering(305) 00:12:46.462 fused_ordering(306) 00:12:46.462 fused_ordering(307) 00:12:46.462 fused_ordering(308) 00:12:46.462 fused_ordering(309) 00:12:46.462 fused_ordering(310) 00:12:46.462 fused_ordering(311) 00:12:46.462 
fused_ordering(312) 00:12:46.462 fused_ordering(313) 00:12:46.462 fused_ordering(314) 00:12:46.462 fused_ordering(315) 00:12:46.462 fused_ordering(316) 00:12:46.462 fused_ordering(317) 00:12:46.462 fused_ordering(318) 00:12:46.462 fused_ordering(319) 00:12:46.462 fused_ordering(320) 00:12:46.462 fused_ordering(321) 00:12:46.462 fused_ordering(322) 00:12:46.462 fused_ordering(323) 00:12:46.462 fused_ordering(324) 00:12:46.462 fused_ordering(325) 00:12:46.462 fused_ordering(326) 00:12:46.462 fused_ordering(327) 00:12:46.462 fused_ordering(328) 00:12:46.462 fused_ordering(329) 00:12:46.462 fused_ordering(330) 00:12:46.462 fused_ordering(331) 00:12:46.462 fused_ordering(332) 00:12:46.462 fused_ordering(333) 00:12:46.462 fused_ordering(334) 00:12:46.462 fused_ordering(335) 00:12:46.462 fused_ordering(336) 00:12:46.462 fused_ordering(337) 00:12:46.462 fused_ordering(338) 00:12:46.462 fused_ordering(339) 00:12:46.462 fused_ordering(340) 00:12:46.462 fused_ordering(341) 00:12:46.462 fused_ordering(342) 00:12:46.462 fused_ordering(343) 00:12:46.462 fused_ordering(344) 00:12:46.462 fused_ordering(345) 00:12:46.462 fused_ordering(346) 00:12:46.462 fused_ordering(347) 00:12:46.462 fused_ordering(348) 00:12:46.462 fused_ordering(349) 00:12:46.462 fused_ordering(350) 00:12:46.462 fused_ordering(351) 00:12:46.462 fused_ordering(352) 00:12:46.462 fused_ordering(353) 00:12:46.462 fused_ordering(354) 00:12:46.462 fused_ordering(355) 00:12:46.462 fused_ordering(356) 00:12:46.462 fused_ordering(357) 00:12:46.462 fused_ordering(358) 00:12:46.462 fused_ordering(359) 00:12:46.462 fused_ordering(360) 00:12:46.462 fused_ordering(361) 00:12:46.462 fused_ordering(362) 00:12:46.462 fused_ordering(363) 00:12:46.462 fused_ordering(364) 00:12:46.462 fused_ordering(365) 00:12:46.462 fused_ordering(366) 00:12:46.462 fused_ordering(367) 00:12:46.462 fused_ordering(368) 00:12:46.462 fused_ordering(369) 00:12:46.462 fused_ordering(370) 00:12:46.462 fused_ordering(371) 00:12:46.462 fused_ordering(372) 00:12:46.462 fused_ordering(373) 00:12:46.462 fused_ordering(374) 00:12:46.462 fused_ordering(375) 00:12:46.462 fused_ordering(376) 00:12:46.462 fused_ordering(377) 00:12:46.462 fused_ordering(378) 00:12:46.462 fused_ordering(379) 00:12:46.462 fused_ordering(380) 00:12:46.462 fused_ordering(381) 00:12:46.462 fused_ordering(382) 00:12:46.462 fused_ordering(383) 00:12:46.462 fused_ordering(384) 00:12:46.462 fused_ordering(385) 00:12:46.462 fused_ordering(386) 00:12:46.462 fused_ordering(387) 00:12:46.462 fused_ordering(388) 00:12:46.462 fused_ordering(389) 00:12:46.462 fused_ordering(390) 00:12:46.462 fused_ordering(391) 00:12:46.462 fused_ordering(392) 00:12:46.462 fused_ordering(393) 00:12:46.462 fused_ordering(394) 00:12:46.462 fused_ordering(395) 00:12:46.462 fused_ordering(396) 00:12:46.463 fused_ordering(397) 00:12:46.463 fused_ordering(398) 00:12:46.463 fused_ordering(399) 00:12:46.463 fused_ordering(400) 00:12:46.463 fused_ordering(401) 00:12:46.463 fused_ordering(402) 00:12:46.463 fused_ordering(403) 00:12:46.463 fused_ordering(404) 00:12:46.463 fused_ordering(405) 00:12:46.463 fused_ordering(406) 00:12:46.463 fused_ordering(407) 00:12:46.463 fused_ordering(408) 00:12:46.463 fused_ordering(409) 00:12:46.463 fused_ordering(410) 00:12:46.723 fused_ordering(411) 00:12:46.723 fused_ordering(412) 00:12:46.723 fused_ordering(413) 00:12:46.723 fused_ordering(414) 00:12:46.723 fused_ordering(415) 00:12:46.723 fused_ordering(416) 00:12:46.723 fused_ordering(417) 00:12:46.723 fused_ordering(418) 00:12:46.723 fused_ordering(419) 
00:12:46.723 fused_ordering(420) 00:12:46.723 fused_ordering(421) 00:12:46.723 fused_ordering(422) 00:12:46.723 fused_ordering(423) 00:12:46.723 fused_ordering(424) 00:12:46.723 fused_ordering(425) 00:12:46.723 fused_ordering(426) 00:12:46.723 fused_ordering(427) 00:12:46.723 fused_ordering(428) 00:12:46.723 fused_ordering(429) 00:12:46.723 fused_ordering(430) 00:12:46.723 fused_ordering(431) 00:12:46.723 fused_ordering(432) 00:12:46.723 fused_ordering(433) 00:12:46.723 fused_ordering(434) 00:12:46.723 fused_ordering(435) 00:12:46.723 fused_ordering(436) 00:12:46.723 fused_ordering(437) 00:12:46.723 fused_ordering(438) 00:12:46.723 fused_ordering(439) 00:12:46.723 fused_ordering(440) 00:12:46.723 fused_ordering(441) 00:12:46.723 fused_ordering(442) 00:12:46.723 fused_ordering(443) 00:12:46.723 fused_ordering(444) 00:12:46.723 fused_ordering(445) 00:12:46.723 fused_ordering(446) 00:12:46.723 fused_ordering(447) 00:12:46.723 fused_ordering(448) 00:12:46.723 fused_ordering(449) 00:12:46.723 fused_ordering(450) 00:12:46.723 fused_ordering(451) 00:12:46.723 fused_ordering(452) 00:12:46.723 fused_ordering(453) 00:12:46.723 fused_ordering(454) 00:12:46.723 fused_ordering(455) 00:12:46.723 fused_ordering(456) 00:12:46.723 fused_ordering(457) 00:12:46.723 fused_ordering(458) 00:12:46.723 fused_ordering(459) 00:12:46.723 fused_ordering(460) 00:12:46.723 fused_ordering(461) 00:12:46.723 fused_ordering(462) 00:12:46.723 fused_ordering(463) 00:12:46.723 fused_ordering(464) 00:12:46.723 fused_ordering(465) 00:12:46.723 fused_ordering(466) 00:12:46.723 fused_ordering(467) 00:12:46.723 fused_ordering(468) 00:12:46.723 fused_ordering(469) 00:12:46.723 fused_ordering(470) 00:12:46.723 fused_ordering(471) 00:12:46.723 fused_ordering(472) 00:12:46.723 fused_ordering(473) 00:12:46.723 fused_ordering(474) 00:12:46.723 fused_ordering(475) 00:12:46.723 fused_ordering(476) 00:12:46.723 fused_ordering(477) 00:12:46.723 fused_ordering(478) 00:12:46.723 fused_ordering(479) 00:12:46.723 fused_ordering(480) 00:12:46.723 fused_ordering(481) 00:12:46.723 fused_ordering(482) 00:12:46.723 fused_ordering(483) 00:12:46.723 fused_ordering(484) 00:12:46.723 fused_ordering(485) 00:12:46.723 fused_ordering(486) 00:12:46.723 fused_ordering(487) 00:12:46.723 fused_ordering(488) 00:12:46.723 fused_ordering(489) 00:12:46.723 fused_ordering(490) 00:12:46.723 fused_ordering(491) 00:12:46.723 fused_ordering(492) 00:12:46.723 fused_ordering(493) 00:12:46.723 fused_ordering(494) 00:12:46.723 fused_ordering(495) 00:12:46.723 fused_ordering(496) 00:12:46.723 fused_ordering(497) 00:12:46.723 fused_ordering(498) 00:12:46.723 fused_ordering(499) 00:12:46.723 fused_ordering(500) 00:12:46.723 fused_ordering(501) 00:12:46.723 fused_ordering(502) 00:12:46.723 fused_ordering(503) 00:12:46.723 fused_ordering(504) 00:12:46.723 fused_ordering(505) 00:12:46.723 fused_ordering(506) 00:12:46.723 fused_ordering(507) 00:12:46.723 fused_ordering(508) 00:12:46.723 fused_ordering(509) 00:12:46.723 fused_ordering(510) 00:12:46.723 fused_ordering(511) 00:12:46.723 fused_ordering(512) 00:12:46.723 fused_ordering(513) 00:12:46.723 fused_ordering(514) 00:12:46.723 fused_ordering(515) 00:12:46.723 fused_ordering(516) 00:12:46.723 fused_ordering(517) 00:12:46.723 fused_ordering(518) 00:12:46.723 fused_ordering(519) 00:12:46.723 fused_ordering(520) 00:12:46.723 fused_ordering(521) 00:12:46.723 fused_ordering(522) 00:12:46.723 fused_ordering(523) 00:12:46.723 fused_ordering(524) 00:12:46.723 fused_ordering(525) 00:12:46.723 fused_ordering(526) 00:12:46.723 
fused_ordering(527) … fused_ordering(1023) 00:12:47.247 [497 consecutive per-iteration fused_ordering counter records (00:12:46.723 through 00:12:47.247), identical in form, elided; only the range endpoints are kept] 11:21:15 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:47.247 11:21:15 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:47.247 11:21:15 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:47.247 11:21:15 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:12:47.247 11:21:15 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:47.247 11:21:15 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:47.247 11:21:15 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:12:47.247 11:21:15 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:47.247 11:21:15 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:47.247 rmmod nvme_rdma 00:12:47.247 rmmod nvme_fabrics 00:12:47.247 11:21:15 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:47.247 11:21:16 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:12:47.247 11:21:16 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:12:47.247 11:21:16 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3509658 ']'
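The teardown just traced is the generic nvmftestfini path: errexit is suspended, the nvme-rdma and nvme-fabrics modules are unloaded under a bounded retry loop, and the killprocess step that follows reaps the nvmf_tgt application only after checking that the pid still names an SPDK reactor rather than a recycled or sudo process. A minimal bash sketch of that pattern, reconstructed from the trace around this point; the retry/sleep details inside the loop are an assumption, only the individual commands shown in the trace are certain:

    # Sketch of the nvmftestfini cleanup pattern visible in the trace.
    nvmfcleanup() {
        sync
        set +e                                   # modules may still be busy; don't abort the run
        for i in {1..20}; do
            # assumed: retry until both unloads succeed
            modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
            sleep 1
        done
        set -e
        return 0
    }

    killprocess() {
        local pid=$1
        # Only signal the pid if it still belongs to the app (the trace sees
        # an SPDK reactor thread named reactor_1), never a sudo wrapper.
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                      # collect exit status; tolerate kill-induced failure
    }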
11:21:16 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3509658 00:12:47.247 11:21:16 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@949 -- # '[' -z 3509658 ']' 00:12:47.247 11:21:16 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # kill -0 3509658 00:12:47.247 11:21:16 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # uname 00:12:47.247 11:21:16 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:47.247 11:21:16 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3509658 00:12:47.247 11:21:16 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:12:47.247 11:21:16 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:12:47.247 11:21:16 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3509658' 00:12:47.247 killing process with pid 3509658 00:12:47.247 11:21:16 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # kill 3509658 00:12:47.247 11:21:16 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # wait 3509658 00:12:47.509 11:21:16 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:47.509 11:21:16 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:47.509 00:12:47.509 real 0m9.277s 00:12:47.509 user 0m5.045s 00:12:47.509 sys 0m5.584s 00:12:47.509 11:21:16 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:47.509 11:21:16 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:47.509 ************************************ 00:12:47.509 END TEST nvmf_fused_ordering 00:12:47.509 ************************************ 00:12:47.509 11:21:16 nvmf_rdma -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:12:47.509 11:21:16 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:47.509 11:21:16 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:47.509 11:21:16 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:47.509 ************************************ 00:12:47.509 START TEST nvmf_delete_subsystem 00:12:47.509 ************************************ 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:12:47.509 * Looking for test storage... 
00:12:47.509 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:47.509 11:21:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:55.671 11:21:23 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:12:55.671 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:12:55.671 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:55.671 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:12:55.672 Found net devices under 0000:98:00.0: mlx_0_0 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.672 11:21:23 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:12:55.672 Found net devices under 0000:98:00.1: mlx_0_1 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # rdma_device_init 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # uname 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:55.672 11:21:23 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:55.672 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:55.672 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:12:55.672 altname enp152s0f0np0 00:12:55.672 altname ens817f0np0 00:12:55.672 inet 192.168.100.8/24 scope global mlx_0_0 00:12:55.672 valid_lft forever preferred_lft forever 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:55.672 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:55.672 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:12:55.672 altname enp152s0f1np1 00:12:55.672 altname ens817f1np1 00:12:55.672 inet 192.168.100.9/24 scope global mlx_0_1 00:12:55.672 valid_lft forever preferred_lft forever 00:12:55.672 11:21:23 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:55.672 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:55.672 192.168.100.9' 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:55.673 192.168.100.9' 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # head -n 1 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:55.673 192.168.100.9' 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # tail -n +2 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # head -n 1 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3513759 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3513759 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # '[' -z 3513759 ']' 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:55.673 11:21:23 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:55.673 [2024-06-10 11:21:23.398365] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:12:55.673 [2024-06-10 11:21:23.398430] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.673 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.673 [2024-06-10 11:21:23.462627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:55.673 [2024-06-10 11:21:23.538282] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.673 [2024-06-10 11:21:23.538334] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.673 [2024-06-10 11:21:23.538342] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.673 [2024-06-10 11:21:23.538348] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.673 [2024-06-10 11:21:23.538354] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:55.673 [2024-06-10 11:21:23.538492] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.673 [2024-06-10 11:21:23.538493] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@863 -- # return 0 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:55.673 [2024-06-10 11:21:24.235104] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb3ea20/0xb42f10) succeed. 00:12:55.673 [2024-06-10 11:21:24.248600] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb3ff20/0xb845a0) succeed. 
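Stepping back to the nvmftestinit phase traced above: the 'Found 0000:98:00.x (0x15b3 - 0x1015)' and 'Found net devices under ...: mlx_0_x' records come from matching Mellanox PCI functions and then deriving each RDMA interface's IPv4 address. A rough sketch of that logic; the sysfs matching loop is a reconstruction, while the get_ip_address pipeline is copied from the trace:

    # Reconstructed sketch: find Mellanox RDMA-capable NICs and their IPv4 addresses.
    get_ip_address() {
        local interface=$1
        # exactly the pipeline the trace runs for mlx_0_0 / mlx_0_1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    mellanox=0x15b3
    for pci in /sys/bus/pci/devices/*; do
        [ "$(cat "$pci/vendor" 2>/dev/null)" = "$mellanox" ] || continue
        for path in "$pci"/net/*; do
            net_dev=${path##*/}                  # e.g. mlx_0_0
            echo "Found net devices under ${pci##*/}: $net_dev ($(get_ip_address "$net_dev"))"
        done
    done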
00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:55.673 [2024-06-10 11:21:24.333749] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:55.673 NULL1 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:55.673 Delay0 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3513958 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:55.673 11:21:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:55.673 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.673 [2024-06-10 11:21:24.432416] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
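Strung together, the bring-up the test just performed is a plain RPC sequence. Everything below is copied argument-for-argument from the trace, only rewritten as direct scripts/rpc.py calls instead of the rpc_cmd wrapper; the Delay0 bdev's 1,000,000 us read/write latencies are what guarantee commands are still queued when the subsystem is deleted a moment later:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_null_create NULL1 1000 512         # 1000 MB null backing bdev, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # 128 queued 512 B random R/W commands against the high-latency namespace
    spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!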
00:12:57.586 11:21:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.586 11:21:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:57.586 11:21:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:58.526 NVMe io qpair process completion error 00:12:58.526 NVMe io qpair process completion error 00:12:58.787 NVMe io qpair process completion error 00:12:58.787 NVMe io qpair process completion error 00:12:58.787 NVMe io qpair process completion error 00:12:58.787 11:21:27 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:58.788 NVMe io qpair process completion error 00:12:58.788 11:21:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:12:58.788 11:21:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3513958 00:12:58.788 11:21:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:59.359 11:21:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:59.359 11:21:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3513958 00:12:59.359 11:21:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:59.620 Read completed with error (sct=0, sc=8) 00:12:59.620 starting I/O failed: -6 00:12:59.620 Read completed with error (sct=0, sc=8) 00:12:59.620 starting I/O failed: -6 00:12:59.620 Read completed with error (sct=0, sc=8) 00:12:59.620 starting I/O failed: -6 00:12:59.620 Read completed with error (sct=0, sc=8) 00:12:59.620 starting I/O failed: -6 00:12:59.620 Read completed with error (sct=0, sc=8) 00:12:59.620 starting I/O failed: -6 00:12:59.620 Write completed with error (sct=0, sc=8) 00:12:59.620 starting I/O failed: -6 00:12:59.620 Read completed with error (sct=0, sc=8) 00:12:59.620 starting I/O failed: -6 00:12:59.620 Write completed with error (sct=0, sc=8) 00:12:59.620 starting I/O failed: -6 00:12:59.620 Write completed with error (sct=0, sc=8) 00:12:59.620 starting I/O failed: -6 00:12:59.620 Write completed with error (sct=0, sc=8) 00:12:59.620 starting I/O failed: -6 00:12:59.620 Read completed with error (sct=0, sc=8) 00:12:59.620 starting I/O failed: -6 00:12:59.620 Read completed with error (sct=0, sc=8) 00:12:59.620 starting I/O failed: -6 00:12:59.620 Write completed with error (sct=0, sc=8) 00:12:59.620 starting I/O failed: -6 00:12:59.620 Read completed with error (sct=0, sc=8) 00:12:59.620 starting I/O failed: -6 00:12:59.620 Read completed with error (sct=0, sc=8) 00:12:59.621 starting I/O failed: -6 00:12:59.621 Write completed with error (sct=0, sc=8) 00:12:59.621 starting I/O failed: -6 00:12:59.621 Write completed with error (sct=0, sc=8) 00:12:59.621 starting I/O failed: -6 00:12:59.621 Read completed with error (sct=0, sc=8) 00:12:59.621 starting I/O failed: -6 00:12:59.621 Read completed with error (sct=0, sc=8) 00:12:59.621 starting I/O failed: -6 00:12:59.621 Write completed with error (sct=0, sc=8) 00:12:59.621 starting I/O failed: -6 00:12:59.621 Read completed with error (sct=0, sc=8) 00:12:59.621 starting I/O failed: -6 00:12:59.621 Write completed with error (sct=0, sc=8) 00:12:59.621 starting I/O failed: -6 00:12:59.621 Write completed with error (sct=0, sc=8) 00:12:59.621 starting I/O failed: -6 00:12:59.621 Read completed with error (sct=0, sc=8) 00:12:59.621 
starting I/O failed: -6 00:12:59.621 [the remaining per-command records (00:12:59.621 through 00:12:59.622) elided: long runs of 'Read completed with error (sct=0, sc=8)' and 'Write completed with error (sct=0, sc=8)' completions interleaved with 'starting I/O failed: -6', continuing in the same pattern as above while the subsystem is torn down]
error (sct=0, sc=8) 00:12:59.622 Write completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Write completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Write completed with error (sct=0, sc=8) 00:12:59.622 Write completed with error (sct=0, sc=8) 00:12:59.622 Write completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Write completed with error (sct=0, sc=8) 00:12:59.622 Write completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Write completed with error (sct=0, sc=8) 00:12:59.622 Write completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Write completed with error (sct=0, sc=8) 00:12:59.622 Write completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Write completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Write completed with error (sct=0, sc=8) 00:12:59.622 Write completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Write completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Write completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Write completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Write completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read 
completed with error (sct=0, sc=8) 00:12:59.622 Write completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Write completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Write completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Write completed with error (sct=0, sc=8) 00:12:59.622 Write completed with error (sct=0, sc=8) 00:12:59.622 Write completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Read completed with error (sct=0, sc=8) 00:12:59.622 Initializing NVMe Controllers 00:12:59.622 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:12:59.622 Controller IO queue size 128, less than required. 00:12:59.622 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:59.622 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:59.622 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:59.622 Initialization complete. Launching workers. 00:12:59.622 ======================================================== 00:12:59.622 Latency(us) 00:12:59.622 Device Information : IOPS MiB/s Average min max 00:12:59.622 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.68 0.04 1591045.41 1000068.29 2967205.89 00:12:59.622 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.68 0.04 1592456.58 1001282.06 2968298.89 00:12:59.622 ======================================================== 00:12:59.622 Total : 161.35 0.08 1591750.99 1000068.29 2968298.89 00:12:59.622 00:12:59.622 11:21:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:59.622 11:21:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3513958 00:12:59.622 11:21:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:59.622 [2024-06-10 11:21:28.541626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:12:59.622 [2024-06-10 11:21:28.541655] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
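[Editor's note] The trace above is the heart of the delete_subsystem test: spdk_nvme_perf runs random I/O against nqn.2016-06.io.spdk:cnode1 while the script deletes the subsystem underneath it, and the expected outcome is exactly what the log shows: a flood of (sct=0, sc=8) completions, a CQ transport error -6, and the controller ending up in a failed state. The surrounding delay/kill -0/sleep lines are a bounded poll on the perf PID. A minimal sketch of that polling pattern, with the loop bound and poll interval taken from the traced commands and the wrapper function assumed for illustration:

# Sketch only: poll a spdk_nvme_perf process until it exits, give up after ~15 s.
# The kill -0 / sleep 0.5 / (( delay++ > 30 )) pattern mirrors the trace above.
wait_for_perf_exit() {
    local perf_pid=$1 delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && { echo "perf $perf_pid still running" >&2; return 1; }
        sleep 0.5
    done
    return 0
}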
00:12:59.622 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:00.193 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:00.193 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3513958 00:13:00.193 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3513958) - No such process 00:13:00.193 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3513958 00:13:00.193 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:13:00.193 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 3513958 00:13:00.193 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait 00:13:00.193 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:00.193 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:13:00.193 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:00.194 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 3513958 00:13:00.194 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:13:00.194 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:00.194 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:00.194 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:00.194 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:00.194 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:00.194 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:00.194 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:00.194 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:00.194 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:00.194 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:00.194 [2024-06-10 11:21:29.053348] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:00.194 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:00.194 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.194 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:00.194 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:00.194 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:00.194 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3514798 00:13:00.194 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@56 -- # delay=0 00:13:00.194 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3514798 00:13:00.194 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:00.194 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:00.194 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.194 [2024-06-10 11:21:29.144894] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:00.797 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:00.797 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3514798 00:13:00.797 11:21:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:01.368 11:21:30 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:01.368 11:21:30 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3514798 00:13:01.368 11:21:30 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:01.628 11:21:30 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:01.628 11:21:30 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3514798 00:13:01.628 11:21:30 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:02.199 11:21:31 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:02.199 11:21:31 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3514798 00:13:02.199 11:21:31 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:02.768 11:21:31 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:02.768 11:21:31 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3514798 00:13:02.768 11:21:31 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:03.341 11:21:32 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:03.341 11:21:32 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3514798 00:13:03.341 11:21:32 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:03.909 11:21:32 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:03.909 11:21:32 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3514798 00:13:03.909 11:21:32 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:04.169 11:21:33 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:04.169 11:21:33 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3514798 00:13:04.169 11:21:33 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:04.739 11:21:33 nvmf_rdma.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:04.739 11:21:33 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3514798 00:13:04.739 11:21:33 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:05.308 11:21:34 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:05.308 11:21:34 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3514798 00:13:05.308 11:21:34 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:05.876 11:21:34 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:05.876 11:21:34 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3514798 00:13:05.876 11:21:34 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:06.445 11:21:35 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:06.445 11:21:35 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3514798 00:13:06.445 11:21:35 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:06.704 11:21:35 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:06.704 11:21:35 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3514798 00:13:06.704 11:21:35 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:07.273 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:07.273 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3514798 00:13:07.273 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:07.533 Initializing NVMe Controllers 00:13:07.533 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:13:07.533 Controller IO queue size 128, less than required. 00:13:07.533 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:07.533 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:07.533 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:07.533 Initialization complete. Launching workers. 
00:13:07.533 ======================================================== 00:13:07.533 Latency(us) 00:13:07.533 Device Information : IOPS MiB/s Average min max 00:13:07.533 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001075.71 1000043.04 1003249.90 00:13:07.533 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1001721.33 1000048.79 1005760.08 00:13:07.533 ======================================================== 00:13:07.533 Total : 256.00 0.12 1001398.52 1000043.04 1005760.08 00:13:07.533 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3514798 00:13:07.793 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3514798) - No such process 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3514798 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:07.793 rmmod nvme_rdma 00:13:07.793 rmmod nvme_fabrics 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3513759 ']' 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3513759 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@949 -- # '[' -z 3513759 ']' 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # kill -0 3513759 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # uname 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3513759 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3513759' 00:13:07.793 killing process with pid 3513759 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # kill 
3513759 00:13:07.793 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # wait 3513759 00:13:08.053 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:08.053 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:08.053 00:13:08.053 real 0m20.599s 00:13:08.053 user 0m50.072s 00:13:08.053 sys 0m6.084s 00:13:08.053 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:08.053 11:21:36 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:08.053 ************************************ 00:13:08.053 END TEST nvmf_delete_subsystem 00:13:08.053 ************************************ 00:13:08.053 11:21:36 nvmf_rdma -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:13:08.053 11:21:36 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:08.053 11:21:36 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:08.053 11:21:36 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:08.053 ************************************ 00:13:08.053 START TEST nvmf_ns_masking 00:13:08.053 ************************************ 00:13:08.053 11:21:37 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:13:08.313 * Looking for test storage... 00:13:08.313 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:08.313 11:21:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.313 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:08.313 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.313 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=3348e6b7-2dc7-4f43-a4b9-0ef728d96f31 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:13:08.314 11:21:37 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:14.898 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # local 
-ga mlx 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:13:14.899 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:13:14.899 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 
00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:13:14.899 Found net devices under 0000:98:00.0: mlx_0_0 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:13:14.899 Found net devices under 0000:98:00.1: mlx_0_1 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@420 -- # rdma_device_init 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # uname 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@502 -- # 
allocate_nic_ips 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:14.899 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:14.899 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:13:14.899 altname enp152s0f0np0 00:13:14.899 altname ens817f0np0 00:13:14.899 inet 192.168.100.8/24 scope global mlx_0_0 00:13:14.899 valid_lft forever preferred_lft forever 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 
addr show mlx_0_1 00:13:14.899 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:14.900 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:14.900 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:13:14.900 altname enp152s0f1np1 00:13:14.900 altname ens817f1np1 00:13:14.900 inet 192.168.100.9/24 scope global mlx_0_1 00:13:14.900 valid_lft forever preferred_lft forever 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:14.900 
11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:14.900 192.168.100.9' 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # head -n 1 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:14.900 192.168.100.9' 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:14.900 192.168.100.9' 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # tail -n +2 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # head -n 1 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3520159 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3520159 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@830 -- # '[' -z 3520159 ']' 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
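[Editor's note] Earlier in this block, before the target application was launched, nvmf/common.sh walked the mlx5 netdevs (mlx_0_0, mlx_0_1) and derived the RDMA target IPs from their IPv4 addresses. Condensed from the traced commands (the helper name and the ip/awk/cut pipeline are as traced; the assignments mirror what this run produced):

# Sketch of get_ip_address as traced: first IPv4 address of the interface,
# with the /prefix length stripped off.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run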
00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:14.900 11:21:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:14.900 [2024-06-10 11:21:43.815371] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:13:14.900 [2024-06-10 11:21:43.815419] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.900 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.160 [2024-06-10 11:21:43.875818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:15.160 [2024-06-10 11:21:43.945100] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.160 [2024-06-10 11:21:43.945139] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.160 [2024-06-10 11:21:43.945147] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.160 [2024-06-10 11:21:43.945153] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.160 [2024-06-10 11:21:43.945159] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.160 [2024-06-10 11:21:43.945298] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.160 [2024-06-10 11:21:43.945415] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.160 [2024-06-10 11:21:43.945573] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.160 [2024-06-10 11:21:43.945574] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.730 11:21:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:15.730 11:21:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@863 -- # return 0 00:13:15.730 11:21:44 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:15.730 11:21:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:15.730 11:21:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:15.730 11:21:44 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.730 11:21:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:15.989 [2024-06-10 11:21:44.799362] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa3e0b0/0xa425a0) succeed. 00:13:15.989 [2024-06-10 11:21:44.813979] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa3f6f0/0xa83c30) succeed. 
00:13:16.249 11:21:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:13:16.249 11:21:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:13:16.249 11:21:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:16.249 Malloc1 00:13:16.249 11:21:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:16.508 Malloc2 00:13:16.508 11:21:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:16.768 11:21:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:16.769 11:21:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:17.029 [2024-06-10 11:21:45.792536] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:17.029 11:21:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:13:17.029 11:21:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3348e6b7-2dc7-4f43-a4b9-0ef728d96f31 -a 192.168.100.8 -s 4420 -i 4 00:13:17.290 11:21:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:13:17.290 11:21:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:13:17.290 11:21:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.290 11:21:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:13:17.290 11:21:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme 
list-ns /dev/nvme0 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:19.831 [ 0]:0x1 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=61b19aa0f3254458ba3884cf40adab39 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 61b19aa0f3254458ba3884cf40adab39 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:19.831 [ 0]:0x1 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=61b19aa0f3254458ba3884cf40adab39 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 61b19aa0f3254458ba3884cf40adab39 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:19.831 [ 1]:0x2 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d5629884f529492cbac45b0560366ee8 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d5629884f529492cbac45b0560366ee8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:13:19.831 11:21:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:20.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.400 11:21:49 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.400 11:21:49 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:20.660 11:21:49 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:13:20.660 11:21:49 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3348e6b7-2dc7-4f43-a4b9-0ef728d96f31 -a 192.168.100.8 -s 4420 -i 4 00:13:21.229 11:21:49 
nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:21.229 11:21:49 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:13:21.229 11:21:49 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.229 11:21:49 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 1 ]] 00:13:21.229 11:21:49 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=1 00:13:21.229 11:21:49 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:13:23.140 11:21:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:23.140 11:21:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:23.140 11:21:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:13:23.140 11:21:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:13:23.140 11:21:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:13:23.140 11:21:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:13:23.140 11:21:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:23.140 11:21:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:23.140 11:21:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:23.140 11:21:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:23.140 11:21:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:13:23.140 11:21:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:13:23.140 11:21:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:13:23.140 11:21:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:13:23.140 11:21:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:23.140 11:21:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:13:23.140 11:21:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:23.140 11:21:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:13:23.140 11:21:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:23.140 11:21:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:23.140 11:21:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:23.140 11:21:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:23.140 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:23.140 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:23.140 11:21:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:13:23.140 11:21:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:23.140 11:21:52 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:23.140 11:21:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:23.140 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:13:23.140 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:23.140 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:23.140 [ 0]:0x2 00:13:23.140 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:23.140 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:23.140 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d5629884f529492cbac45b0560366ee8 00:13:23.140 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d5629884f529492cbac45b0560366ee8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:23.140 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:23.458 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:13:23.458 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:23.458 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:23.458 [ 0]:0x1 00:13:23.458 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:23.458 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:23.458 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=61b19aa0f3254458ba3884cf40adab39 00:13:23.458 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 61b19aa0f3254458ba3884cf40adab39 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:23.458 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:13:23.458 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:23.458 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:23.458 [ 1]:0x2 00:13:23.458 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:23.458 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:23.458 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d5629884f529492cbac45b0560366ee8 00:13:23.458 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d5629884f529492cbac45b0560366ee8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:23.458 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:13:23.720 11:21:52 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:23.720 [ 0]:0x2 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d5629884f529492cbac45b0560366ee8 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d5629884f529492cbac45b0560366ee8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:13:23.720 11:21:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:24.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.290 11:21:53 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:24.290 11:21:53 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:13:24.290 11:21:53 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3348e6b7-2dc7-4f43-a4b9-0ef728d96f31 -a 192.168.100.8 -s 4420 -i 4 00:13:24.861 11:21:53 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:24.861 11:21:53 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:13:24.861 11:21:53 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:13:24.861 11:21:53 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:13:24.861 11:21:53 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:13:24.861 11:21:53 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:13:26.772 11:21:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:26.772 11:21:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:26.772 11:21:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:13:26.772 11:21:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:13:26.772 11:21:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:13:26.772 11:21:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:13:26.772 11:21:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:26.772 11:21:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:26.772 11:21:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:26.772 11:21:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:26.772 11:21:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:13:26.772 11:21:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:26.772 11:21:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:26.772 [ 0]:0x1 00:13:26.772 11:21:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:26.772 11:21:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:27.032 11:21:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=61b19aa0f3254458ba3884cf40adab39 00:13:27.032 11:21:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 61b19aa0f3254458ba3884cf40adab39 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:27.032 11:21:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:13:27.032 11:21:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:27.032 11:21:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:27.032 [ 1]:0x2 00:13:27.032 11:21:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:27.032 11:21:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:27.032 11:21:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d5629884f529492cbac45b0560366ee8 00:13:27.032 11:21:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d5629884f529492cbac45b0560366ee8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:27.032 11:21:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:27.032 11:21:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:13:27.032 11:21:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:13:27.032 11:21:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 
-- # valid_exec_arg ns_is_visible 0x1 00:13:27.032 11:21:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:13:27.032 11:21:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:27.032 11:21:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:13:27.032 11:21:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:27.032 11:21:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:13:27.032 11:21:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:27.032 11:21:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:27.032 11:21:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:27.032 11:21:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:27.291 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:27.291 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:27.291 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:13:27.291 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:27.291 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:27.291 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:27.291 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:13:27.291 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:27.291 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:27.291 [ 0]:0x2 00:13:27.291 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:27.291 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:27.291 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d5629884f529492cbac45b0560366ee8 00:13:27.291 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d5629884f529492cbac45b0560366ee8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:27.291 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:27.291 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:13:27.291 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:27.292 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:27.292 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:27.292 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:27.292 11:21:56 nvmf_rdma.nvmf_ns_masking -- 
common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:27.292 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:27.292 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:27.292 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:27.292 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:13:27.292 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:27.292 [2024-06-10 11:21:56.224162] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:27.292 request: 00:13:27.292 { 00:13:27.292 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:27.292 "nsid": 2, 00:13:27.292 "host": "nqn.2016-06.io.spdk:host1", 00:13:27.292 "method": "nvmf_ns_remove_host", 00:13:27.292 "req_id": 1 00:13:27.292 } 00:13:27.292 Got JSON-RPC error response 00:13:27.292 response: 00:13:27.292 { 00:13:27.292 "code": -32602, 00:13:27.292 "message": "Invalid parameters" 00:13:27.292 } 00:13:27.292 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:13:27.292 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:27.292 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:27.292 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:27.292 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:13:27.292 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:13:27.292 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:13:27.292 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:13:27.292 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:27.292 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:13:27.292 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:27.292 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:13:27.292 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:27.292 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:27.552 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:27.552 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:27.552 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:27.552 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:27.552 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:13:27.552 11:21:56 nvmf_rdma.nvmf_ns_masking -- 
common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:27.552 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:27.552 11:21:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:27.552 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:13:27.552 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:27.552 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:27.552 [ 0]:0x2 00:13:27.552 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:27.552 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:27.552 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d5629884f529492cbac45b0560366ee8 00:13:27.552 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d5629884f529492cbac45b0560366ee8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:27.552 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:13:27.552 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:28.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.123 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:28.123 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:13:28.123 11:21:56 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:13:28.123 11:21:56 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:28.123 11:21:56 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:13:28.123 11:21:56 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:28.123 11:21:56 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:28.123 11:21:56 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:13:28.123 11:21:56 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:28.123 11:21:56 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:28.123 rmmod nvme_rdma 00:13:28.123 rmmod nvme_fabrics 00:13:28.123 11:21:56 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:28.123 11:21:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:13:28.123 11:21:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:13:28.123 11:21:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3520159 ']' 00:13:28.123 11:21:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3520159 00:13:28.123 11:21:57 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@949 -- # '[' -z 3520159 ']' 00:13:28.123 11:21:57 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # kill -0 3520159 00:13:28.123 11:21:57 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # uname 00:13:28.123 11:21:57 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:28.123 11:21:57 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3520159 00:13:28.123 11:21:57 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@955 -- # process_name=reactor_0 
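The ns_is_visible probe that drives all of the pass/fail checks above reduces to two nvme-cli calls; a rough reconstruction assembled from the traced fragments (the real ns_masking.sh may differ in detail):

    ns_is_visible() {
        # A visible namespace appears in the controller's active namespace list.
        nvme list-ns /dev/nvme0 | grep "$1"
        # It also reports its real NGUID; a masked namespace (added with
        # --no-auto-visible and no matching host NQN) identifies as all zeroes.
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != 00000000000000000000000000000000 ]]
    }

    ns_is_visible 0x1   # NGUID 61b19aa0f3254458ba3884cf40adab39 while visible
    ns_is_visible 0x2   # d5629884f529492cbac45b0560366ee8 once host1 is added

The NOT wrapper asserts the inverse: after nvmf_ns_remove_host the same probe has to fail, and, as the JSON-RPC error above shows, nvmf_ns_remove_host is itself rejected for namespace 2, which was added without --no-auto-visible and so has no per-host visibility list to edit.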
00:13:28.123 11:21:57 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:28.123 11:21:57 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3520159' 00:13:28.123 killing process with pid 3520159 00:13:28.123 11:21:57 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@968 -- # kill 3520159 00:13:28.123 11:21:57 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@973 -- # wait 3520159 00:13:28.384 11:21:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:28.384 11:21:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:28.384 00:13:28.384 real 0m20.296s 00:13:28.384 user 0m57.939s 00:13:28.384 sys 0m6.192s 00:13:28.384 11:21:57 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:28.384 11:21:57 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:28.384 ************************************ 00:13:28.384 END TEST nvmf_ns_masking 00:13:28.384 ************************************ 00:13:28.384 11:21:57 nvmf_rdma -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:13:28.384 11:21:57 nvmf_rdma -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:13:28.384 11:21:57 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:28.384 11:21:57 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:28.384 11:21:57 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:28.645 ************************************ 00:13:28.645 START TEST nvmf_nvme_cli 00:13:28.645 ************************************ 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:13:28.645 * Looking for test storage... 
00:13:28.645 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:13:28.645 11:21:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:13:36.788 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:13:36.788 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:13:36.788 Found net devices under 0000:98:00.0: mlx_0_0 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:36.788 11:22:04 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:13:36.788 Found net devices under 0000:98:00.1: mlx_0_1 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@420 -- # rdma_device_init 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # uname 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:36.788 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:36.788 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:13:36.788 altname enp152s0f0np0 00:13:36.788 altname ens817f0np0 00:13:36.788 inet 192.168.100.8/24 scope global mlx_0_0 00:13:36.788 valid_lft forever preferred_lft forever 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:36.788 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:36.788 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:13:36.788 altname enp152s0f1np1 00:13:36.788 altname ens817f1np1 00:13:36.788 inet 192.168.100.9/24 scope global mlx_0_1 00:13:36.788 valid_lft forever preferred_lft forever 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:36.788 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:36.789 192.168.100.9' 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:36.789 192.168.100.9' 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # head -n 1 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:36.789 192.168.100.9' 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # tail -n +2 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # head -n 1 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3526689 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3526689 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # '[' -z 3526689 ']' 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:36.789 11:22:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.789 [2024-06-10 11:22:04.545695] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:13:36.789 [2024-06-10 11:22:04.545746] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.789 EAL: No free 2048 kB hugepages reported on node 1 00:13:36.789 [2024-06-10 11:22:04.606171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:36.789 [2024-06-10 11:22:04.671696] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.789 [2024-06-10 11:22:04.671733] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.789 [2024-06-10 11:22:04.671741] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.789 [2024-06-10 11:22:04.671747] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.789 [2024-06-10 11:22:04.671753] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
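The target IPs used throughout (192.168.100.8 and 192.168.100.9) come from the interface walk traced just above; isolated, that probing amounts to the following sketch, assuming the RDMA netdevs are named mlx_0_0 and mlx_0_1 as in this run:

    get_ip_address() {
        # First IPv4 address on the interface, with the prefix length stripped.
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run

Because both ports identify as mlx5 (0x15b3 - 0x1015), the harness also rewrites NVME_CONNECT to 'nvme connect -i 15', so later connects request 15 I/O queues.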
00:13:36.789 [2024-06-10 11:22:04.671823] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.789 [2024-06-10 11:22:04.671956] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.789 [2024-06-10 11:22:04.672112] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.789 [2024-06-10 11:22:04.672113] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@863 -- # return 0 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.789 [2024-06-10 11:22:05.395321] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfa20b0/0xfa65a0) succeed. 00:13:36.789 [2024-06-10 11:22:05.409901] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfa36f0/0xfe7c30) succeed. 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.789 Malloc0 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.789 Malloc1 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.789 11:22:05 
nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.789 [2024-06-10 11:22:05.616645] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -a 192.168.100.8 -s 4420 00:13:36.789 00:13:36.789 Discovery Log Number of Records 2, Generation counter 2 00:13:36.789 =====Discovery Log Entry 0====== 00:13:36.789 trtype: rdma 00:13:36.789 adrfam: ipv4 00:13:36.789 subtype: current discovery subsystem 00:13:36.789 treq: not required 00:13:36.789 portid: 0 00:13:36.789 trsvcid: 4420 00:13:36.789 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:36.789 traddr: 192.168.100.8 00:13:36.789 eflags: explicit discovery connections, duplicate discovery information 00:13:36.789 rdma_prtype: not specified 00:13:36.789 rdma_qptype: connected 00:13:36.789 rdma_cms: rdma-cm 00:13:36.789 rdma_pkey: 0x0000 00:13:36.789 =====Discovery Log Entry 1====== 00:13:36.789 trtype: rdma 00:13:36.789 adrfam: ipv4 00:13:36.789 subtype: nvme subsystem 00:13:36.789 treq: not required 00:13:36.789 portid: 0 00:13:36.789 trsvcid: 4420 00:13:36.789 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:36.789 traddr: 192.168.100.8 00:13:36.789 eflags: none 00:13:36.789 rdma_prtype: not specified 00:13:36.789 rdma_qptype: connected 00:13:36.789 rdma_cms: rdma-cm 00:13:36.789 rdma_pkey: 0x0000 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:36.789 11:22:05 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:36.789 11:22:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:38.702 11:22:07 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:38.702 11:22:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local i=0 00:13:38.702 11:22:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.702 11:22:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:13:38.702 11:22:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:13:38.702 11:22:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # sleep 2 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # return 0 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:13:40.620 /dev/nvme0n1 ]] 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:40.620 11:22:09 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:40.620 11:22:09 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:41.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.557 11:22:10 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:41.557 11:22:10 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # local i=0 00:13:41.557 11:22:10 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:13:41.558 11:22:10 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.558 11:22:10 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:13:41.558 11:22:10 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.558 11:22:10 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1230 -- # return 0 00:13:41.558 11:22:10 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:41.558 11:22:10 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:41.558 11:22:10 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:41.558 11:22:10 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:41.558 11:22:10 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:41.558 11:22:10 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:41.558 11:22:10 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:41.558 11:22:10 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:41.558 11:22:10 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:13:41.558 11:22:10 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:41.558 11:22:10 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:41.558 11:22:10 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:13:41.558 11:22:10 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:41.558 11:22:10 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:41.818 rmmod nvme_rdma 00:13:41.818 rmmod nvme_fabrics 00:13:41.819 11:22:10 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v 
-r nvme-fabrics 00:13:41.819 11:22:10 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:13:41.819 11:22:10 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:13:41.819 11:22:10 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3526689 ']' 00:13:41.819 11:22:10 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3526689 00:13:41.819 11:22:10 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@949 -- # '[' -z 3526689 ']' 00:13:41.819 11:22:10 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # kill -0 3526689 00:13:41.819 11:22:10 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # uname 00:13:41.819 11:22:10 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:41.819 11:22:10 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3526689 00:13:41.819 11:22:10 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:41.819 11:22:10 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:41.819 11:22:10 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3526689' 00:13:41.819 killing process with pid 3526689 00:13:41.819 11:22:10 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # kill 3526689 00:13:41.819 11:22:10 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # wait 3526689 00:13:42.080 11:22:10 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:42.080 11:22:10 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:42.080 00:13:42.080 real 0m13.496s 00:13:42.080 user 0m26.716s 00:13:42.080 sys 0m5.663s 00:13:42.080 11:22:10 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:42.080 11:22:10 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:42.080 ************************************ 00:13:42.080 END TEST nvmf_nvme_cli 00:13:42.080 ************************************ 00:13:42.080 11:22:10 nvmf_rdma -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:13:42.080 11:22:10 nvmf_rdma -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:13:42.080 11:22:10 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:42.080 11:22:10 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:42.080 11:22:10 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:42.080 ************************************ 00:13:42.080 START TEST nvmf_host_management 00:13:42.080 ************************************ 00:13:42.080 11:22:10 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:13:42.080 * Looking for test storage... 
00:13:42.080 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:42.080 11:22:11 nvmf_rdma.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:42.080 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:42.081 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.081 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.081 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.081 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.081 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.081 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.081 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.081 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.081 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.081 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.342 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:42.342 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:42.342 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.342 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.342 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:42.342 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.342 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:42.342 11:22:11 nvmf_rdma.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.342 11:22:11 nvmf_rdma.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.342 11:22:11 nvmf_rdma.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.342 11:22:11 nvmf_rdma.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:42.343 11:22:11 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:13:48.925 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:13:48.925 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:48.925 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:13:48.926 Found net devices under 0000:98:00.0: mlx_0_0 00:13:48.926 
11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:13:48.926 Found net devices under 0000:98:00.1: mlx_0_1 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@420 -- # rdma_device_init 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # uname 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:48.926 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:49.187 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:49.187 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:13:49.187 altname enp152s0f0np0 00:13:49.187 altname ens817f0np0 00:13:49.187 inet 192.168.100.8/24 scope global mlx_0_0 00:13:49.187 valid_lft forever preferred_lft forever 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:49.187 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:49.187 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:13:49.187 altname enp152s0f1np1 00:13:49.187 altname ens817f1np1 00:13:49.187 inet 192.168.100.9/24 scope global mlx_0_1 00:13:49.187 valid_lft forever preferred_lft forever 
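The RDMA_IP_LIST / NVMF_*_TARGET_IP values echoed in the next few lines are obtained by parsing the output of 'ip -o -4 addr show' for each detected mlx interface, exactly as the xtrace shows. A standalone sketch of that helper (same pipeline used by get_ip_address in nvmf/common.sh):

  get_ip_address() {
      # first IPv4 address on the interface, prefix length stripped
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 on this rig
  NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 on this rig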
00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:49.187 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:49.188 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:49.188 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:49.188 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:49.188 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:49.188 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:13:49.188 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:49.188 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:49.188 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:49.188 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:49.188 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:49.188 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:49.188 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:49.188 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:49.188 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:49.188 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:49.188 11:22:17 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:13:49.188 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:49.188 11:22:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:49.188 192.168.100.9' 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:49.188 192.168.100.9' 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # head -n 1 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:49.188 192.168.100.9' 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # tail -n +2 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # head -n 1 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3531576 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3531576 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 3531576 ']' 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:49.188 11:22:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.188 [2024-06-10 11:22:18.112553] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:13:49.188 [2024-06-10 11:22:18.112626] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.188 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.448 [2024-06-10 11:22:18.195221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:49.448 [2024-06-10 11:22:18.290983] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.448 [2024-06-10 11:22:18.291041] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.448 [2024-06-10 11:22:18.291051] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.448 [2024-06-10 11:22:18.291059] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.448 [2024-06-10 11:22:18.291066] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:49.448 [2024-06-10 11:22:18.291196] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.448 [2024-06-10 11:22:18.291354] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:49.448 [2024-06-10 11:22:18.291484] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.448 [2024-06-10 11:22:18.291485] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:13:50.017 11:22:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:50.017 11:22:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:13:50.017 11:22:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:50.017 11:22:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:50.017 11:22:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:50.017 11:22:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.017 11:22:18 nvmf_rdma.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:50.017 11:22:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:50.017 11:22:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:50.017 [2024-06-10 11:22:18.972531] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1191320/0x1195810) succeed. 00:13:50.017 [2024-06-10 11:22:18.987411] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1192960/0x11d6ea0) succeed. 
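The four "Reactor started on core 1..4" notices above follow from the -m 0x1E core mask passed to nvmf_tgt: 0x1E is binary 11110, i.e. cores 1 through 4 with core 0 excluded. Illustrative only, such a mask can be built with shell arithmetic:

  # OR together the bits of the cores you want; here cores 1-4 -> 0x1E
  printf '0x%X\n' $(( (1 << 1) | (1 << 2) | (1 << 3) | (1 << 4) ))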
00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:50.278 Malloc0 00:13:50.278 [2024-06-10 11:22:19.166525] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3531815 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3531815 /var/tmp/bdevperf.sock 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 3531815 ']' 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:50.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:50.278 { 00:13:50.278 "params": { 00:13:50.278 "name": "Nvme$subsystem", 00:13:50.278 "trtype": "$TEST_TRANSPORT", 00:13:50.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:50.278 "adrfam": "ipv4", 00:13:50.278 "trsvcid": "$NVMF_PORT", 00:13:50.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:50.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:50.278 "hdgst": ${hdgst:-false}, 00:13:50.278 "ddgst": ${ddgst:-false} 00:13:50.278 }, 00:13:50.278 "method": "bdev_nvme_attach_controller" 00:13:50.278 } 00:13:50.278 EOF 00:13:50.278 )") 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:50.278 11:22:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:50.278 "params": { 00:13:50.278 "name": "Nvme0", 00:13:50.278 "trtype": "rdma", 00:13:50.278 "traddr": "192.168.100.8", 00:13:50.278 "adrfam": "ipv4", 00:13:50.278 "trsvcid": "4420", 00:13:50.278 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:50.278 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:50.278 "hdgst": false, 00:13:50.278 "ddgst": false 00:13:50.278 }, 00:13:50.278 "method": "bdev_nvme_attach_controller" 00:13:50.278 }' 00:13:50.556 [2024-06-10 11:22:19.264939] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:13:50.556 [2024-06-10 11:22:19.264990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3531815 ] 00:13:50.556 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.556 [2024-06-10 11:22:19.324234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.556 [2024-06-10 11:22:19.388309] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.864 Running I/O for 10 seconds... 
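The bdevperf flags just above (-q 64 -o 65536 -w verify -t 10) request queue depth 64, 64 KiB I/Os, a verify workload, and a 10 second run against the Nvme0 controller attached via the generated JSON. The waitforio helper whose xtrace follows simply polls bdev_get_iostat on the bdevperf RPC socket until the bdev has completed at least 100 reads; a rough standalone equivalent (assuming scripts/rpc.py and jq are available, retry count as in the trace):

  for (( i = 10; i != 0; i-- )); do
      reads=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
              | jq -r '.bdevs[0].num_read_ops')
      [ "$reads" -ge 100 ] && break
      # the test script may pause between polls; omitted here for brevity
  done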
00:13:51.126 11:22:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:51.126 11:22:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:13:51.126 11:22:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:51.126 11:22:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:51.126 11:22:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:51.126 11:22:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:51.126 11:22:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:51.126 11:22:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:51.126 11:22:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:51.126 11:22:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:51.126 11:22:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:51.126 11:22:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:51.126 11:22:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:51.126 11:22:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:51.126 11:22:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:51.126 11:22:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:51.126 11:22:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:51.126 11:22:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:51.399 11:22:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:51.399 11:22:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1264 00:13:51.399 11:22:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1264 -ge 100 ']' 00:13:51.399 11:22:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:51.399 11:22:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:51.399 11:22:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:51.399 11:22:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:51.399 11:22:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:51.399 11:22:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:51.399 11:22:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:51.399 11:22:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:51.399 11:22:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:51.399 11:22:20 nvmf_rdma.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:13:51.399 11:22:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:51.399 11:22:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:52.342 [2024-06-10 11:22:21.147328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x182600 00:13:52.342 [2024-06-10 11:22:21.147366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:46848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x182600 00:13:52.343 [2024-06-10 11:22:21.147397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafd80 len:0x10000 key:0x182600 00:13:52.343 [2024-06-10 11:22:21.147415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:47104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x182600 00:13:52.343 [2024-06-10 11:22:21.147431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:47232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fc80 len:0x10000 key:0x182600 00:13:52.343 [2024-06-10 11:22:21.147447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:47360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fc00 len:0x10000 key:0x182600 00:13:52.343 [2024-06-10 11:22:21.147463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x182600 00:13:52.343 [2024-06-10 11:22:21.147479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:47616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5fb00 len:0x10000 key:0x182600 00:13:52.343 [2024-06-10 11:22:21.147496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4fa80 len:0x10000 key:0x182600 00:13:52.343 [2024-06-10 
11:22:21.147512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0x182600 00:13:52.343 [2024-06-10 11:22:21.147529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:48000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x182600 00:13:52.343 [2024-06-10 11:22:21.147545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:48128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x182600 00:13:52.343 [2024-06-10 11:22:21.147562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f880 len:0x10000 key:0x182600 00:13:52.343 [2024-06-10 11:22:21.147578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:48384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b040 len:0x10000 key:0x182100 00:13:52.343 [2024-06-10 11:22:21.147600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a3afc0 len:0x10000 key:0x182100 00:13:52.343 [2024-06-10 11:22:21.147616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:48640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a2af40 len:0x10000 key:0x182100 00:13:52.343 [2024-06-10 11:22:21.147633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x182100 00:13:52.343 [2024-06-10 11:22:21.147650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a0ae40 len:0x10000 key:0x182100 00:13:52.343 [2024-06-10 11:22:21.147666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea980 len:0x10000 key:0x182500 00:13:52.343 [2024-06-10 11:22:21.147682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013485000 len:0x10000 key:0x182400 00:13:52.343 [2024-06-10 11:22:21.147699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013464000 len:0x10000 key:0x182400 00:13:52.343 [2024-06-10 11:22:21.147716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013443000 len:0x10000 key:0x182400 00:13:52.343 [2024-06-10 11:22:21.147734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b79c000 len:0x10000 key:0x182400 00:13:52.343 [2024-06-10 11:22:21.147750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b77b000 len:0x10000 key:0x182400 00:13:52.343 [2024-06-10 11:22:21.147770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b75a000 len:0x10000 key:0x182400 00:13:52.343 [2024-06-10 11:22:21.147788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b739000 len:0x10000 key:0x182400 00:13:52.343 [2024-06-10 11:22:21.147804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b718000 len:0x10000 key:0x182400 00:13:52.343 [2024-06-10 11:22:21.147820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147829] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b6f7000 len:0x10000 key:0x182400 00:13:52.343 [2024-06-10 11:22:21.147837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b6d6000 len:0x10000 key:0x182400 00:13:52.343 [2024-06-10 11:22:21.147853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b6b5000 len:0x10000 key:0x182400 00:13:52.343 [2024-06-10 11:22:21.147869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b694000 len:0x10000 key:0x182400 00:13:52.343 [2024-06-10 11:22:21.147885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b673000 len:0x10000 key:0x182400 00:13:52.343 [2024-06-10 11:22:21.147901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b652000 len:0x10000 key:0x182400 00:13:52.343 [2024-06-10 11:22:21.147917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b631000 len:0x10000 key:0x182400 00:13:52.343 [2024-06-10 11:22:21.147936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.343 [2024-06-10 11:22:21.147945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b610000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.147952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.147961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc40000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.147968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.147979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20000c03f000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.147986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.147996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c01e000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bffd000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bfdc000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bfbb000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf9a000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf79000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf58000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf37000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf16000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bef5000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bed4000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be92000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be71000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc1f000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bbfe000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bbdd000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bbbc000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb9b000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb7a000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb59000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb38000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb17000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000baf6000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bad5000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bab4000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba93000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.148418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba72000 len:0x10000 key:0x182400 00:13:52.344 [2024-06-10 11:22:21.148425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.344 [2024-06-10 11:22:21.150630] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: 
*NOTICE*: qpair 0x200019201580 was disconnected and freed. reset controller. 00:13:52.344 [2024-06-10 11:22:21.151855] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:52.344 task offset: 46720 on job bdev=Nvme0n1 fails 00:13:52.344 00:13:52.344 Latency(us) 00:13:52.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.344 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:52.344 Job: Nvme0n1 ended in about 1.58 seconds with error 00:13:52.344 Verification LBA range: start 0x0 length 0x400 00:13:52.344 Nvme0n1 : 1.58 851.01 53.19 40.52 0.00 70948.52 2457.60 1013623.47 00:13:52.344 =================================================================================================================== 00:13:52.344 Total : 851.01 53.19 40.52 0.00 70948.52 2457.60 1013623.47 00:13:52.344 [2024-06-10 11:22:21.153858] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:52.344 11:22:21 nvmf_rdma.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3531815 00:13:52.344 11:22:21 nvmf_rdma.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:52.344 11:22:21 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:52.344 11:22:21 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:52.344 11:22:21 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:52.344 11:22:21 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:52.344 11:22:21 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:52.345 11:22:21 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:52.345 { 00:13:52.345 "params": { 00:13:52.345 "name": "Nvme$subsystem", 00:13:52.345 "trtype": "$TEST_TRANSPORT", 00:13:52.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:52.345 "adrfam": "ipv4", 00:13:52.345 "trsvcid": "$NVMF_PORT", 00:13:52.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:52.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:52.345 "hdgst": ${hdgst:-false}, 00:13:52.345 "ddgst": ${ddgst:-false} 00:13:52.345 }, 00:13:52.345 "method": "bdev_nvme_attach_controller" 00:13:52.345 } 00:13:52.345 EOF 00:13:52.345 )") 00:13:52.345 11:22:21 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:52.345 11:22:21 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:52.345 11:22:21 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:52.345 11:22:21 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:52.345 "params": { 00:13:52.345 "name": "Nvme0", 00:13:52.345 "trtype": "rdma", 00:13:52.345 "traddr": "192.168.100.8", 00:13:52.345 "adrfam": "ipv4", 00:13:52.345 "trsvcid": "4420", 00:13:52.345 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:52.345 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:52.345 "hdgst": false, 00:13:52.345 "ddgst": false 00:13:52.345 }, 00:13:52.345 "method": "bdev_nvme_attach_controller" 00:13:52.345 }' 00:13:52.345 [2024-06-10 11:22:21.209211] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:13:52.345 [2024-06-10 11:22:21.209262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3532171 ] 00:13:52.345 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.345 [2024-06-10 11:22:21.268323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.606 [2024-06-10 11:22:21.332706] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.606 Running I/O for 1 seconds... 00:13:53.990 00:13:53.990 Latency(us) 00:13:53.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.991 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:53.991 Verification LBA range: start 0x0 length 0x400 00:13:53.991 Nvme0n1 : 1.02 2495.19 155.95 0.00 0.00 25080.36 1460.91 47404.37 00:13:53.991 =================================================================================================================== 00:13:53.991 Total : 2495.19 155.95 0.00 0.00 25080.36 1460.91 47404.37 00:13:53.991 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 3531815 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:53.991 rmmod nvme_rdma 00:13:53.991 rmmod nvme_fabrics 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3531576 ']' 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3531576 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@949 -- # '[' -z 3531576 ']' 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@953 -- # kill -0 3531576 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management 
-- common/autotest_common.sh@954 -- # uname 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3531576 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3531576' 00:13:53.991 killing process with pid 3531576 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@968 -- # kill 3531576 00:13:53.991 11:22:22 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@973 -- # wait 3531576 00:13:54.252 [2024-06-10 11:22:22.981918] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:54.252 11:22:22 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:54.252 11:22:22 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:54.252 11:22:22 nvmf_rdma.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:54.252 00:13:54.252 real 0m12.058s 00:13:54.252 user 0m24.306s 00:13:54.252 sys 0m5.969s 00:13:54.252 11:22:23 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:54.252 11:22:23 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:54.252 ************************************ 00:13:54.252 END TEST nvmf_host_management 00:13:54.252 ************************************ 00:13:54.252 11:22:23 nvmf_rdma -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:13:54.252 11:22:23 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:54.252 11:22:23 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:54.252 11:22:23 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:54.252 ************************************ 00:13:54.252 START TEST nvmf_lvol 00:13:54.252 ************************************ 00:13:54.252 11:22:23 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:13:54.252 * Looking for test storage... 
00:13:54.252 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:54.252 11:22:23 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.252 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:54.252 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.252 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.252 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.252 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.252 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.252 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.252 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.252 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.252 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.252 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.252 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:54.252 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:54.252 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.252 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.252 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:54.252 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.252 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:54.252 11:22:23 nvmf_rdma.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt 
]] 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:54.253 11:22:23 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:00.838 11:22:29 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:14:00.838 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:00.838 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:14:00.838 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:14:01.099 Found net devices under 0000:98:00.0: mlx_0_0 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:14:01.099 Found net devices under 0000:98:00.1: mlx_0_1 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:01.099 11:22:29 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@420 -- # rdma_device_init 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # uname 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:01.099 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:01.099 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:14:01.099 altname enp152s0f0np0 00:14:01.099 altname ens817f0np0 00:14:01.099 inet 192.168.100.8/24 scope global mlx_0_0 00:14:01.099 valid_lft forever preferred_lft forever 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:01.099 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:01.099 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:14:01.099 altname enp152s0f1np1 00:14:01.099 altname ens817f1np1 00:14:01.099 inet 192.168.100.9/24 scope global mlx_0_1 00:14:01.099 valid_lft forever preferred_lft forever 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:01.099 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.100 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:01.100 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:01.100 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:14:01.100 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:01.100 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.100 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ 
mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:01.100 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.100 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:01.100 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:01.100 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:14:01.100 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:01.100 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:01.100 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:01.100 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:01.100 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:01.100 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:01.100 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:01.100 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:01.100 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:01.100 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:01.100 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:01.100 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:01.100 11:22:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:01.100 192.168.100.9' 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:01.100 192.168.100.9' 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # head -n 1 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:01.100 192.168.100.9' 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # tail -n +2 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # head -n 1 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3536222 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3536222 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- 
common/autotest_common.sh@830 -- # '[' -z 3536222 ']' 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:01.100 11:22:30 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:01.360 [2024-06-10 11:22:30.110779] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:14:01.360 [2024-06-10 11:22:30.110845] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.360 EAL: No free 2048 kB hugepages reported on node 1 00:14:01.360 [2024-06-10 11:22:30.178956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:01.360 [2024-06-10 11:22:30.253517] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.360 [2024-06-10 11:22:30.253557] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.360 [2024-06-10 11:22:30.253564] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.360 [2024-06-10 11:22:30.253571] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.360 [2024-06-10 11:22:30.253576] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.360 [2024-06-10 11:22:30.253713] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.360 [2024-06-10 11:22:30.253843] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.360 [2024-06-10 11:22:30.254016] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.929 11:22:30 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:01.929 11:22:30 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@863 -- # return 0 00:14:01.929 11:22:30 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:01.929 11:22:30 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:01.929 11:22:30 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:02.188 11:22:30 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.188 11:22:30 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:02.188 [2024-06-10 11:22:31.104515] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14ac5d0/0x14b0ac0) succeed. 00:14:02.188 [2024-06-10 11:22:31.118470] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14adb70/0x14f2150) succeed. 
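Note on the interface/address discovery traced above: get_ip_address in nvmf/common.sh is just a three-stage pipeline over the one-line output of ip(8). A minimal standalone sketch of it (the interface names and 192.168.100.x addresses are the ones this run reports; everything else here is illustrative):

# Print the first IPv4 address bound to an RDMA netdev, as nvmf/common.sh@112-113 does:
# take field 4 of `ip -o -4 addr show` (e.g. 192.168.100.8/24) and strip the prefix length.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
get_ip_address mlx_0_1   # -> 192.168.100.9 in this run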
00:14:02.448 11:22:31 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:02.707 11:22:31 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:02.707 11:22:31 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:02.707 11:22:31 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:02.707 11:22:31 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:02.966 11:22:31 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:03.228 11:22:31 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7d1672fe-95c2-4884-8b90-54d2ef98a4ef 00:14:03.228 11:22:31 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7d1672fe-95c2-4884-8b90-54d2ef98a4ef lvol 20 00:14:03.228 11:22:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=250c248b-4b69-4099-b11c-1868cb681ce7 00:14:03.228 11:22:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:03.488 11:22:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 250c248b-4b69-4099-b11c-1868cb681ce7 00:14:03.488 11:22:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:14:03.748 [2024-06-10 11:22:32.598460] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:03.748 11:22:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:14:04.008 11:22:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3536896 00:14:04.008 11:22:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:04.008 11:22:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:04.008 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.948 11:22:33 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 250c248b-4b69-4099-b11c-1868cb681ce7 MY_SNAPSHOT 00:14:05.208 11:22:33 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=35c10357-16fb-4dce-b85c-377795a2302b 00:14:05.208 11:22:33 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 250c248b-4b69-4099-b11c-1868cb681ce7 30 00:14:05.208 11:22:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 35c10357-16fb-4dce-b85c-377795a2302b MY_CLONE 00:14:05.468 11:22:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # 
clone=4cc20b99-6274-427c-8fb5-eb08331d986f 00:14:05.468 11:22:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4cc20b99-6274-427c-8fb5-eb08331d986f 00:14:05.728 11:22:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3536896 00:14:15.743 Initializing NVMe Controllers 00:14:15.743 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:14:15.743 Controller IO queue size 128, less than required. 00:14:15.743 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:15.743 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:15.743 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:15.743 Initialization complete. Launching workers. 00:14:15.743 ======================================================== 00:14:15.743 Latency(us) 00:14:15.743 Device Information : IOPS MiB/s Average min max 00:14:15.743 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 22923.90 89.55 5584.65 2205.78 30294.17 00:14:15.743 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 22993.20 89.82 5567.45 2620.93 35362.52 00:14:15.743 ======================================================== 00:14:15.743 Total : 45917.10 179.36 5576.04 2205.78 35362.52 00:14:15.743 00:14:15.743 11:22:44 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:15.743 11:22:44 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 250c248b-4b69-4099-b11c-1868cb681ce7 00:14:15.743 11:22:44 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7d1672fe-95c2-4884-8b90-54d2ef98a4ef 00:14:15.743 11:22:44 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:15.743 11:22:44 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:15.743 11:22:44 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:15.743 11:22:44 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:15.743 11:22:44 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:15.743 11:22:44 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:15.743 11:22:44 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:15.744 11:22:44 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:15.744 11:22:44 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:15.744 11:22:44 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:15.744 rmmod nvme_rdma 00:14:15.744 rmmod nvme_fabrics 00:14:15.744 11:22:44 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:15.744 11:22:44 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:15.744 11:22:44 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:15.744 11:22:44 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3536222 ']' 00:14:15.744 11:22:44 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3536222 00:14:15.744 11:22:44 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@949 -- # '[' -z 3536222 ']' 00:14:15.744 11:22:44 nvmf_rdma.nvmf_lvol -- 
common/autotest_common.sh@953 -- # kill -0 3536222 00:14:15.744 11:22:44 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@954 -- # uname 00:14:15.744 11:22:44 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:15.744 11:22:44 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3536222 00:14:16.004 11:22:44 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:16.004 11:22:44 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:16.004 11:22:44 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3536222' 00:14:16.004 killing process with pid 3536222 00:14:16.004 11:22:44 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@968 -- # kill 3536222 00:14:16.004 11:22:44 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@973 -- # wait 3536222 00:14:16.004 11:22:44 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:16.004 11:22:44 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:16.004 00:14:16.004 real 0m21.894s 00:14:16.004 user 1m10.461s 00:14:16.004 sys 0m6.066s 00:14:16.004 11:22:44 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:16.004 11:22:44 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:16.004 ************************************ 00:14:16.004 END TEST nvmf_lvol 00:14:16.004 ************************************ 00:14:16.265 11:22:45 nvmf_rdma -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:14:16.265 11:22:45 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:16.265 11:22:45 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:16.265 11:22:45 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:16.265 ************************************ 00:14:16.265 START TEST nvmf_lvs_grow 00:14:16.265 ************************************ 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:14:16.265 * Looking for test storage... 
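Before the lvs_grow output begins, the nvmf_lvol pass that just ended is worth condensing into the RPC sequence it actually drove (a sketch assembled from the xtrace above; the UUIDs are the ones this run reported):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512                                          # -> Malloc0
$rpc bdev_malloc_create 64 512                                          # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'          # stripe the two malloc bdevs
$rpc bdev_lvol_create_lvstore raid0 lvs                                 # -> 7d1672fe-95c2-4884-8b90-54d2ef98a4ef
$rpc bdev_lvol_create -u 7d1672fe-95c2-4884-8b90-54d2ef98a4ef lvol 20   # -> 250c248b-4b69-4099-b11c-1868cb681ce7
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 250c248b-4b69-4099-b11c-1868cb681ce7
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
$rpc bdev_lvol_snapshot 250c248b-4b69-4099-b11c-1868cb681ce7 MY_SNAPSHOT   # -> 35c10357-16fb-4dce-b85c-377795a2302b
$rpc bdev_lvol_resize 250c248b-4b69-4099-b11c-1868cb681ce7 30
$rpc bdev_lvol_clone 35c10357-16fb-4dce-b85c-377795a2302b MY_CLONE         # -> 4cc20b99-6274-427c-8fb5-eb08331d986f
$rpc bdev_lvol_inflate 4cc20b99-6274-427c-8fb5-eb08331d986f                # decouple the clone from its snapshot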
00:14:16.265 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.265 11:22:45 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:16.266 11:22:45 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:14:24.461 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:14:24.461 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:14:24.461 Found net devices under 0000:98:00.0: mlx_0_0 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:24.461 11:22:52 
nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:14:24.461 Found net devices under 0000:98:00.1: mlx_0_1 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@420 -- # rdma_device_init 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # uname 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
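The rdma_device_init step traced above amounts to loading a fixed kernel module stack before any addressing is configured. The modprobe order, lifted directly from the trace (nvme-rdma itself comes later, once the RDMA IPs have been validated):

modprobe ib_cm
modprobe ib_core
modprobe ib_umad
modprobe ib_uverbs
modprobe iw_cm
modprobe rdma_cm
modprobe rdma_ucm
# and, after the transport options are settled (nvmf/common.sh@474):
modprobe nvme-rdma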
00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:24.461 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:24.461 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:14:24.461 altname enp152s0f0np0 00:14:24.461 altname ens817f0np0 00:14:24.461 inet 192.168.100.8/24 scope global mlx_0_0 00:14:24.461 valid_lft forever preferred_lft forever 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:24.461 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:24.461 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:14:24.461 altname enp152s0f1np1 00:14:24.461 altname ens817f1np1 00:14:24.461 inet 192.168.100.9/24 scope global mlx_0_1 00:14:24.461 valid_lft forever preferred_lft forever 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:24.461 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:24.462 192.168.100.9' 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:24.462 192.168.100.9' 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # head -n 1 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:24.462 192.168.100.9' 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # tail -n +2 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # head -n 1 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3542933 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3542933 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # '[' -z 3542933 ']' 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:24.462 11:22:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:24.462 [2024-06-10 11:22:52.311729] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:14:24.462 [2024-06-10 11:22:52.311794] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.462 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.462 [2024-06-10 11:22:52.376236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.462 [2024-06-10 11:22:52.449933] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.462 [2024-06-10 11:22:52.449969] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:24.462 [2024-06-10 11:22:52.449976] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:24.462 [2024-06-10 11:22:52.449983] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:24.462 [2024-06-10 11:22:52.449988] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
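For orientation, the nvmfappstart call above reduces to launching the target binary and blocking until its JSON-RPC socket answers. A condensed sketch (the polling loop merely stands in for the harness's waitforlisten helper, and rpc_get_methods is just one convenient readiness probe):

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# block until the app is listening on /var/tmp/spdk.sock before issuing test RPCs
until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done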
00:14:24.462 [2024-06-10 11:22:52.450013] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.462 11:22:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:24.462 11:22:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # return 0 00:14:24.462 11:22:53 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:24.462 11:22:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:24.462 11:22:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:24.462 11:22:53 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.462 11:22:53 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:24.462 [2024-06-10 11:22:53.287160] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1847f00/0x184c3f0) succeed. 00:14:24.462 [2024-06-10 11:22:53.300492] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1849400/0x188da80) succeed. 00:14:24.462 11:22:53 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:24.462 11:22:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:14:24.462 11:22:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:24.462 11:22:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:24.462 ************************************ 00:14:24.462 START TEST lvs_grow_clean 00:14:24.462 ************************************ 00:14:24.462 11:22:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # lvs_grow 00:14:24.462 11:22:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:24.462 11:22:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:24.462 11:22:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:24.462 11:22:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:24.462 11:22:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:24.462 11:22:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:24.462 11:22:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:24.462 11:22:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:24.462 11:22:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:24.723 11:22:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:24.723 11:22:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 
--md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:24.984 11:22:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=fdb8ccec-cc2b-445d-88d7-2bdfba3a590a 00:14:24.984 11:22:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fdb8ccec-cc2b-445d-88d7-2bdfba3a590a 00:14:24.984 11:22:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:24.984 11:22:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:24.984 11:22:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:24.984 11:22:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fdb8ccec-cc2b-445d-88d7-2bdfba3a590a lvol 150 00:14:25.245 11:22:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c8d3ff03-c20e-4f5e-8c54-c8ca71a11ac9 00:14:25.245 11:22:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:25.245 11:22:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:25.506 [2024-06-10 11:22:54.237928] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:25.506 [2024-06-10 11:22:54.237977] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:25.506 true 00:14:25.506 11:22:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fdb8ccec-cc2b-445d-88d7-2bdfba3a590a 00:14:25.506 11:22:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:25.506 11:22:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:25.506 11:22:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:25.766 11:22:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c8d3ff03-c20e-4f5e-8c54-c8ca71a11ac9 00:14:25.766 11:22:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:14:26.026 [2024-06-10 11:22:54.828167] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:26.026 11:22:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:14:26.026 11:22:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3543640 
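The fixture built above pairs a file-backed AIO bdev with a 4 MiB-cluster lvstore, so that growing the file can later grow the store. Replayed as plain commands (sizes, names and the lvstore UUID are taken from the trace; run from the workspace root):

cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
truncate -s 200M test/nvmf/target/aio_bdev
scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
    --md-pages-per-cluster-ratio 300 aio_bdev lvs        # -> 49 data clusters at 200M
scripts/rpc.py bdev_lvol_create -u fdb8ccec-cc2b-445d-88d7-2bdfba3a590a lvol 150
truncate -s 400M test/nvmf/target/aio_bdev               # enlarge the backing file
scripts/rpc.py bdev_aio_rescan aio_bdev                  # block count 51200 -> 102400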
00:14:26.026 11:22:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:14:26.026 11:22:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:14:26.026 11:22:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3543640 /var/tmp/bdevperf.sock
00:14:26.026 11:22:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # '[' -z 3543640 ']'
00:14:26.026 11:22:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:14:26.026 11:22:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local max_retries=100
00:14:26.026 11:22:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:14:26.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:14:26.026 11:22:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # xtrace_disable
00:14:26.026 11:22:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:14:26.286 [2024-06-10 11:22:55.040789] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization...
00:14:26.286 [2024-06-10 11:22:55.040837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3543640 ]
00:14:26.286 EAL: No free 2048 kB hugepages reported on node 1
00:14:26.286 [2024-06-10 11:22:55.116743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:26.286 [2024-06-10 11:22:55.181045] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:14:26.857 11:22:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:14:26.857 11:22:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # return 0
00:14:26.857 11:22:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:14:27.117 Nvme0n1
00:14:27.117 11:22:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:14:27.378 [
00:14:27.378 {
00:14:27.378 "name": "Nvme0n1",
00:14:27.378 "aliases": [
00:14:27.378 "c8d3ff03-c20e-4f5e-8c54-c8ca71a11ac9"
00:14:27.378 ],
00:14:27.378 "product_name": "NVMe disk",
00:14:27.378 "block_size": 4096,
00:14:27.378 "num_blocks": 38912,
00:14:27.378 "uuid": "c8d3ff03-c20e-4f5e-8c54-c8ca71a11ac9",
00:14:27.378 "assigned_rate_limits": {
00:14:27.378 "rw_ios_per_sec": 0,
00:14:27.378 "rw_mbytes_per_sec": 0,
00:14:27.378 "r_mbytes_per_sec": 0,
00:14:27.378 "w_mbytes_per_sec": 0
00:14:27.378 },
00:14:27.378 "claimed": false,
00:14:27.378 "zoned": false,
00:14:27.378 "supported_io_types": {
00:14:27.378 "read": true,
00:14:27.378 "write": true,
00:14:27.378 "unmap": true,
00:14:27.378 "write_zeroes": true,
00:14:27.378 "flush": true,
00:14:27.378 "reset": true,
00:14:27.378 "compare": true,
00:14:27.378 "compare_and_write": true,
00:14:27.378 "abort": true,
00:14:27.378 "nvme_admin": true,
00:14:27.378 "nvme_io": true
00:14:27.378 },
00:14:27.378 "memory_domains": [
00:14:27.378 {
00:14:27.378 "dma_device_id": "SPDK_RDMA_DMA_DEVICE",
00:14:27.378 "dma_device_type": 0
00:14:27.378 }
00:14:27.378 ],
00:14:27.378 "driver_specific": {
00:14:27.378 "nvme": [
00:14:27.378 {
00:14:27.378 "trid": {
00:14:27.378 "trtype": "RDMA",
00:14:27.378 "adrfam": "IPv4",
00:14:27.378 "traddr": "192.168.100.8",
00:14:27.378 "trsvcid": "4420",
00:14:27.378 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:14:27.378 },
00:14:27.378 "ctrlr_data": {
00:14:27.378 "cntlid": 1,
00:14:27.378 "vendor_id": "0x8086",
00:14:27.378 "model_number": "SPDK bdev Controller",
00:14:27.378 "serial_number": "SPDK0",
00:14:27.378 "firmware_revision": "24.09",
00:14:27.378 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:14:27.378 "oacs": {
00:14:27.378 "security": 0,
00:14:27.378 "format": 0,
00:14:27.378 "firmware": 0,
00:14:27.378 "ns_manage": 0
00:14:27.378 },
00:14:27.378 "multi_ctrlr": true,
00:14:27.378 "ana_reporting": false
00:14:27.378 },
00:14:27.378 "vs": {
00:14:27.378 "nvme_version": "1.3"
00:14:27.378 },
00:14:27.378 "ns_data": {
00:14:27.378 "id": 1,
00:14:27.378 "can_share": true
00:14:27.378 }
00:14:27.378 }
00:14:27.378 ],
00:14:27.378 "mp_policy": "active_passive"
00:14:27.378 }
00:14:27.378 }
00:14:27.378 ]
00:14:27.378 11:22:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3543846
00:14:27.378 11:22:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:14:27.378 11:22:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:14:27.378 Running I/O for 10 seconds...
00:14:28.764 Latency(us)
00:14:28.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:28.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:28.764 Nvme0n1 : 1.00 25857.00 101.00 0.00 0.00 0.00 0.00 0.00
00:14:28.764 ===================================================================================================================
00:14:28.764 Total : 25857.00 101.00 0.00 0.00 0.00 0.00 0.00
00:14:28.764
00:14:29.334 11:22:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fdb8ccec-cc2b-445d-88d7-2bdfba3a590a
00:14:29.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:29.595 Nvme0n1 : 2.00 26114.50 102.01 0.00 0.00 0.00 0.00 0.00
00:14:29.595 ===================================================================================================================
00:14:29.595 Total : 26114.50 102.01 0.00 0.00 0.00 0.00 0.00
00:14:29.595
00:14:29.595 true
00:14:29.595 11:22:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fdb8ccec-cc2b-445d-88d7-2bdfba3a590a
00:14:29.595 11:22:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:14:29.595 11:22:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:14:29.595 11:22:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:14:29.595 11:22:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3543846
00:14:30.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:30.537 Nvme0n1 : 3.00 26230.67 102.46 0.00 0.00 0.00 0.00 0.00
00:14:30.537 ===================================================================================================================
00:14:30.537 Total : 26230.67 102.46 0.00 0.00 0.00 0.00 0.00
00:14:30.537
00:14:31.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:31.480 Nvme0n1 : 4.00 26289.00 102.69 0.00 0.00 0.00 0.00 0.00
00:14:31.480 ===================================================================================================================
00:14:31.480 Total : 26289.00 102.69 0.00 0.00 0.00 0.00 0.00
00:14:31.480
00:14:32.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:32.421 Nvme0n1 : 5.00 26336.40 102.88 0.00 0.00 0.00 0.00 0.00
00:14:32.421 ===================================================================================================================
00:14:32.421 Total : 26336.40 102.88 0.00 0.00 0.00 0.00 0.00
00:14:32.421
00:14:33.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:33.367 Nvme0n1 : 6.00 26378.83 103.04 0.00 0.00 0.00 0.00 0.00
00:14:33.367 ===================================================================================================================
00:14:33.367 Total : 26378.83 103.04 0.00 0.00 0.00 0.00 0.00
00:14:33.367
00:14:34.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:34.752 Nvme0n1 : 7.00 26405.00 103.14 0.00 0.00 0.00 0.00 0.00
00:14:34.752 ===================================================================================================================
00:14:34.752 Total : 26405.00 103.14 0.00 0.00 0.00 0.00 0.00
00:14:34.752
00:14:35.694 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:35.694 Nvme0n1 : 8.00 26432.00 103.25 0.00 0.00 0.00 0.00 0.00
00:14:35.694 ===================================================================================================================
00:14:35.694 Total : 26432.00 103.25 0.00 0.00 0.00 0.00 0.00
00:14:35.694
00:14:36.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:36.636 Nvme0n1 : 9.00 26449.89 103.32 0.00 0.00 0.00 0.00 0.00
00:14:36.636 ===================================================================================================================
00:14:36.636 Total : 26449.89 103.32 0.00 0.00 0.00 0.00 0.00
00:14:36.636
00:14:37.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:37.577 Nvme0n1 : 10.00 26467.10 103.39 0.00 0.00 0.00 0.00 0.00
00:14:37.577 ===================================================================================================================
00:14:37.577 Total : 26467.10 103.39 0.00 0.00 0.00 0.00 0.00
00:14:37.577
00:14:37.577
00:14:37.577 Latency(us)
00:14:37.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:37.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:37.577 Nvme0n1 : 10.01 26467.51 103.39 0.00 0.00 4832.91 3345.07 18896.21
00:14:37.577 ===================================================================================================================
00:14:37.577 Total : 26467.51 103.39 0.00 0.00 4832.91 3345.07 18896.21
00:14:37.577 0
00:14:37.577 11:23:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3543640
00:14:37.577 11:23:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@949 -- # '[' -z 3543640 ']'
00:14:37.577 11:23:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # kill -0 3543640
00:14:37.577 11:23:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # uname
00:14:37.577 11:23:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:14:37.577 11:23:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3543640
00:14:37.577 11:23:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:14:37.577 11:23:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:14:37.577 11:23:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3543640'
00:14:37.577 killing process with pid 3543640
00:14:37.577 11:23:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # kill 3543640
00:14:37.577 Received shutdown signal, test time was about 10.000000 seconds
00:14:37.577
00:14:37.577 Latency(us)
00:14:37.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:37.577 ===================================================================================================================
00:14:37.577 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:14:37.577 11:23:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # wait 3543640
00:14:37.577 11:23:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420
nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:14:37.837 11:23:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:38.097 11:23:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fdb8ccec-cc2b-445d-88d7-2bdfba3a590a 00:14:38.097 11:23:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:38.097 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:38.097 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:38.097 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:38.357 [2024-06-10 11:23:07.145557] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:38.357 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fdb8ccec-cc2b-445d-88d7-2bdfba3a590a 00:14:38.357 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:14:38.357 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fdb8ccec-cc2b-445d-88d7-2bdfba3a590a 00:14:38.357 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:38.357 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:38.357 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:38.357 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:38.357 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:38.357 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:38.357 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:38.357 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:14:38.357 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fdb8ccec-cc2b-445d-88d7-2bdfba3a590a 00:14:38.618 request: 00:14:38.618 { 00:14:38.618 "uuid": "fdb8ccec-cc2b-445d-88d7-2bdfba3a590a", 00:14:38.618 "method": "bdev_lvol_get_lvstores", 00:14:38.618 "req_id": 1 00:14:38.618 } 00:14:38.618 Got JSON-RPC error response 00:14:38.618 response: 00:14:38.618 { 00:14:38.618 "code": -19, 00:14:38.618 "message": "No such device" 00:14:38.618 } 
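The "No such device" response above is the expected outcome: the harness's NOT wrapper (from autotest_common.sh) inverts the exit status, so the test passes only if bdev_lvol_get_lvstores fails once the backing aio_bdev has been deleted out from under the lvstore. A minimal standalone sketch of the same assertion, assuming a running SPDK target and reusing the rpc.py path and lvstore UUID from this run:

    LVS_UUID=fdb8ccec-cc2b-445d-88d7-2bdfba3a590a
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # lookup must fail (JSON-RPC error -19, "No such device") after the aio_bdev is gone
    if "$RPC" bdev_lvol_get_lvstores -u "$LVS_UUID" >/dev/null 2>&1; then
        echo "FAIL: lvstore still reported after its aio_bdev was deleted" >&2
        exit 1
    fi
    echo "OK: lookup failed as expected"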
00:14:38.618 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:14:38.618 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:38.618 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:38.618 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:38.618 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:38.618 aio_bdev 00:14:38.618 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c8d3ff03-c20e-4f5e-8c54-c8ca71a11ac9 00:14:38.618 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_name=c8d3ff03-c20e-4f5e-8c54-c8ca71a11ac9 00:14:38.618 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:38.618 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local i 00:14:38.618 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:38.618 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:38.618 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:38.879 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c8d3ff03-c20e-4f5e-8c54-c8ca71a11ac9 -t 2000 00:14:38.879 [ 00:14:38.879 { 00:14:38.879 "name": "c8d3ff03-c20e-4f5e-8c54-c8ca71a11ac9", 00:14:38.879 "aliases": [ 00:14:38.879 "lvs/lvol" 00:14:38.879 ], 00:14:38.879 "product_name": "Logical Volume", 00:14:38.879 "block_size": 4096, 00:14:38.879 "num_blocks": 38912, 00:14:38.879 "uuid": "c8d3ff03-c20e-4f5e-8c54-c8ca71a11ac9", 00:14:38.879 "assigned_rate_limits": { 00:14:38.879 "rw_ios_per_sec": 0, 00:14:38.879 "rw_mbytes_per_sec": 0, 00:14:38.879 "r_mbytes_per_sec": 0, 00:14:38.879 "w_mbytes_per_sec": 0 00:14:38.879 }, 00:14:38.879 "claimed": false, 00:14:38.879 "zoned": false, 00:14:38.879 "supported_io_types": { 00:14:38.879 "read": true, 00:14:38.879 "write": true, 00:14:38.879 "unmap": true, 00:14:38.879 "write_zeroes": true, 00:14:38.879 "flush": false, 00:14:38.879 "reset": true, 00:14:38.879 "compare": false, 00:14:38.879 "compare_and_write": false, 00:14:38.879 "abort": false, 00:14:38.879 "nvme_admin": false, 00:14:38.879 "nvme_io": false 00:14:38.879 }, 00:14:38.879 "driver_specific": { 00:14:38.879 "lvol": { 00:14:38.879 "lvol_store_uuid": "fdb8ccec-cc2b-445d-88d7-2bdfba3a590a", 00:14:38.879 "base_bdev": "aio_bdev", 00:14:38.879 "thin_provision": false, 00:14:38.879 "num_allocated_clusters": 38, 00:14:38.879 "snapshot": false, 00:14:38.879 "clone": false, 00:14:38.879 "esnap_clone": false 00:14:38.879 } 00:14:38.879 } 00:14:38.879 } 00:14:38.879 ] 00:14:38.879 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # return 0 00:14:38.879 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 
-u fdb8ccec-cc2b-445d-88d7-2bdfba3a590a 00:14:38.879 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:39.140 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:39.140 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fdb8ccec-cc2b-445d-88d7-2bdfba3a590a 00:14:39.140 11:23:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:39.400 11:23:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:39.400 11:23:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c8d3ff03-c20e-4f5e-8c54-c8ca71a11ac9 00:14:39.400 11:23:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fdb8ccec-cc2b-445d-88d7-2bdfba3a590a 00:14:39.661 11:23:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:39.661 11:23:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:39.922 00:14:39.922 real 0m15.228s 00:14:39.922 user 0m15.218s 00:14:39.922 sys 0m0.992s 00:14:39.922 11:23:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:39.922 11:23:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:39.922 ************************************ 00:14:39.922 END TEST lvs_grow_clean 00:14:39.922 ************************************ 00:14:39.922 11:23:08 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:39.922 11:23:08 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:39.922 11:23:08 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:39.922 11:23:08 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:39.922 ************************************ 00:14:39.922 START TEST lvs_grow_dirty 00:14:39.922 ************************************ 00:14:39.922 11:23:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # lvs_grow dirty 00:14:39.922 11:23:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:39.922 11:23:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:39.922 11:23:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:39.922 11:23:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:39.922 11:23:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:39.922 11:23:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:39.922 11:23:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:39.922 11:23:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:39.922 11:23:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:40.183 11:23:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:40.183 11:23:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:40.183 11:23:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6bcfee8f-0020-4b92-a876-3a5e9bbf58fe 00:14:40.183 11:23:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bcfee8f-0020-4b92-a876-3a5e9bbf58fe 00:14:40.183 11:23:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:40.470 11:23:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:40.470 11:23:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:40.470 11:23:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6bcfee8f-0020-4b92-a876-3a5e9bbf58fe lvol 150 00:14:40.470 11:23:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f8278253-241e-4a6c-9269-2945fc93a0e7 00:14:40.470 11:23:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:40.470 11:23:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:40.732 [2024-06-10 11:23:09.525718] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:40.732 [2024-06-10 11:23:09.525773] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:40.732 true 00:14:40.732 11:23:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bcfee8f-0020-4b92-a876-3a5e9bbf58fe 00:14:40.732 11:23:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:40.732 11:23:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:40.732 11:23:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:40.992 11:23:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f8278253-241e-4a6c-9269-2945fc93a0e7 00:14:41.252 11:23:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:14:41.252 [2024-06-10 11:23:10.132020] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:41.252 11:23:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:14:41.512 11:23:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3546718 00:14:41.512 11:23:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:41.512 11:23:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:41.512 11:23:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3546718 /var/tmp/bdevperf.sock 00:14:41.512 11:23:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 3546718 ']' 00:14:41.512 11:23:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:41.512 11:23:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:41.512 11:23:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:41.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:41.512 11:23:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:41.513 11:23:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:41.513 [2024-06-10 11:23:10.332458] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:14:41.513 [2024-06-10 11:23:10.332505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3546718 ] 00:14:41.513 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.513 [2024-06-10 11:23:10.409019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.513 [2024-06-10 11:23:10.473304] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.493 11:23:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:42.493 11:23:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:14:42.493 11:23:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:42.493 Nvme0n1 00:14:42.493 11:23:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:42.753 [ 00:14:42.753 { 00:14:42.753 "name": "Nvme0n1", 00:14:42.753 "aliases": [ 00:14:42.753 "f8278253-241e-4a6c-9269-2945fc93a0e7" 00:14:42.753 ], 00:14:42.753 "product_name": "NVMe disk", 00:14:42.753 "block_size": 4096, 00:14:42.753 "num_blocks": 38912, 00:14:42.753 "uuid": "f8278253-241e-4a6c-9269-2945fc93a0e7", 00:14:42.753 "assigned_rate_limits": { 00:14:42.753 "rw_ios_per_sec": 0, 00:14:42.753 "rw_mbytes_per_sec": 0, 00:14:42.753 "r_mbytes_per_sec": 0, 00:14:42.753 "w_mbytes_per_sec": 0 00:14:42.753 }, 00:14:42.753 "claimed": false, 00:14:42.753 "zoned": false, 00:14:42.753 "supported_io_types": { 00:14:42.753 "read": true, 00:14:42.753 "write": true, 00:14:42.753 "unmap": true, 00:14:42.753 "write_zeroes": true, 00:14:42.753 "flush": true, 00:14:42.753 "reset": true, 00:14:42.753 "compare": true, 00:14:42.753 "compare_and_write": true, 00:14:42.753 "abort": true, 00:14:42.753 "nvme_admin": true, 00:14:42.754 "nvme_io": true 00:14:42.754 }, 00:14:42.754 "memory_domains": [ 00:14:42.754 { 00:14:42.754 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:14:42.754 "dma_device_type": 0 00:14:42.754 } 00:14:42.754 ], 00:14:42.754 "driver_specific": { 00:14:42.754 "nvme": [ 00:14:42.754 { 00:14:42.754 "trid": { 00:14:42.754 "trtype": "RDMA", 00:14:42.754 "adrfam": "IPv4", 00:14:42.754 "traddr": "192.168.100.8", 00:14:42.754 "trsvcid": "4420", 00:14:42.754 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:42.754 }, 00:14:42.754 "ctrlr_data": { 00:14:42.754 "cntlid": 1, 00:14:42.754 "vendor_id": "0x8086", 00:14:42.754 "model_number": "SPDK bdev Controller", 00:14:42.754 "serial_number": "SPDK0", 00:14:42.754 "firmware_revision": "24.09", 00:14:42.754 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:42.754 "oacs": { 00:14:42.754 "security": 0, 00:14:42.754 "format": 0, 00:14:42.754 "firmware": 0, 00:14:42.754 "ns_manage": 0 00:14:42.754 }, 00:14:42.754 "multi_ctrlr": true, 00:14:42.754 "ana_reporting": false 00:14:42.754 }, 00:14:42.754 "vs": { 00:14:42.754 "nvme_version": "1.3" 00:14:42.754 }, 00:14:42.754 "ns_data": { 00:14:42.754 "id": 1, 00:14:42.754 "can_share": true 00:14:42.754 } 00:14:42.754 } 00:14:42.754 ], 00:14:42.754 "mp_policy": "active_passive" 00:14:42.754 } 00:14:42.754 } 00:14:42.754 ] 00:14:42.754 11:23:11 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3546905 00:14:42.754 11:23:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:42.754 11:23:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:42.754 Running I/O for 10 seconds... 00:14:43.695 Latency(us) 00:14:43.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.695 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.695 Nvme0n1 : 1.00 25823.00 100.87 0.00 0.00 0.00 0.00 0.00 00:14:43.695 =================================================================================================================== 00:14:43.695 Total : 25823.00 100.87 0.00 0.00 0.00 0.00 0.00 00:14:43.695 00:14:44.635 11:23:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6bcfee8f-0020-4b92-a876-3a5e9bbf58fe 00:14:44.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.635 Nvme0n1 : 2.00 26095.50 101.94 0.00 0.00 0.00 0.00 0.00 00:14:44.635 =================================================================================================================== 00:14:44.635 Total : 26095.50 101.94 0.00 0.00 0.00 0.00 0.00 00:14:44.635 00:14:44.896 true 00:14:44.896 11:23:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bcfee8f-0020-4b92-a876-3a5e9bbf58fe 00:14:44.896 11:23:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:44.896 11:23:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:44.896 11:23:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:44.896 11:23:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3546905 00:14:45.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.838 Nvme0n1 : 3.00 26208.33 102.38 0.00 0.00 0.00 0.00 0.00 00:14:45.838 =================================================================================================================== 00:14:45.839 Total : 26208.33 102.38 0.00 0.00 0.00 0.00 0.00 00:14:45.839 00:14:46.780 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.780 Nvme0n1 : 4.00 26280.75 102.66 0.00 0.00 0.00 0.00 0.00 00:14:46.781 =================================================================================================================== 00:14:46.781 Total : 26280.75 102.66 0.00 0.00 0.00 0.00 0.00 00:14:46.781 00:14:47.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.723 Nvme0n1 : 5.00 26335.60 102.87 0.00 0.00 0.00 0.00 0.00 00:14:47.723 =================================================================================================================== 00:14:47.723 Total : 26335.60 102.87 0.00 0.00 0.00 0.00 0.00 00:14:47.723 00:14:48.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.666 Nvme0n1 : 6.00 26372.83 103.02 0.00 0.00 0.00 0.00 0.00 00:14:48.666 
=================================================================================================================== 00:14:48.666 Total : 26372.83 103.02 0.00 0.00 0.00 0.00 0.00 00:14:48.666 00:14:50.050 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.050 Nvme0n1 : 7.00 26400.14 103.13 0.00 0.00 0.00 0.00 0.00 00:14:50.050 =================================================================================================================== 00:14:50.050 Total : 26400.14 103.13 0.00 0.00 0.00 0.00 0.00 00:14:50.050 00:14:50.989 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.989 Nvme0n1 : 8.00 26427.75 103.23 0.00 0.00 0.00 0.00 0.00 00:14:50.989 =================================================================================================================== 00:14:50.989 Total : 26427.75 103.23 0.00 0.00 0.00 0.00 0.00 00:14:50.989 00:14:51.928 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.928 Nvme0n1 : 9.00 26435.44 103.26 0.00 0.00 0.00 0.00 0.00 00:14:51.928 =================================================================================================================== 00:14:51.928 Total : 26435.44 103.26 0.00 0.00 0.00 0.00 0.00 00:14:51.928 00:14:52.865 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.865 Nvme0n1 : 10.00 26451.30 103.33 0.00 0.00 0.00 0.00 0.00 00:14:52.865 =================================================================================================================== 00:14:52.865 Total : 26451.30 103.33 0.00 0.00 0.00 0.00 0.00 00:14:52.865 00:14:52.865 00:14:52.865 Latency(us) 00:14:52.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.865 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.865 Nvme0n1 : 10.00 26452.28 103.33 0.00 0.00 4835.17 3659.09 19333.12 00:14:52.865 =================================================================================================================== 00:14:52.865 Total : 26452.28 103.33 0.00 0.00 4835.17 3659.09 19333.12 00:14:52.865 0 00:14:52.865 11:23:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3546718 00:14:52.865 11:23:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@949 -- # '[' -z 3546718 ']' 00:14:52.865 11:23:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # kill -0 3546718 00:14:52.865 11:23:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # uname 00:14:52.865 11:23:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:52.865 11:23:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3546718 00:14:52.865 11:23:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:52.865 11:23:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:52.865 11:23:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3546718' 00:14:52.865 killing process with pid 3546718 00:14:52.865 11:23:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # kill 3546718 00:14:52.865 Received shutdown signal, test time was about 10.000000 seconds 00:14:52.865 00:14:52.865 Latency(us) 00:14:52.865 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.865 =================================================================================================================== 00:14:52.865 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:52.865 11:23:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # wait 3546718 00:14:52.865 11:23:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:14:53.125 11:23:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:53.385 11:23:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bcfee8f-0020-4b92-a876-3a5e9bbf58fe 00:14:53.385 11:23:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:53.385 11:23:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:53.385 11:23:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:53.385 11:23:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3542933 00:14:53.385 11:23:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3542933 00:14:53.385 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3542933 Killed "${NVMF_APP[@]}" "$@" 00:14:53.385 11:23:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:53.385 11:23:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:53.385 11:23:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:53.385 11:23:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:53.385 11:23:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:53.645 11:23:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3549082 00:14:53.645 11:23:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3549082 00:14:53.645 11:23:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:53.645 11:23:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 3549082 ']' 00:14:53.645 11:23:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.645 11:23:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:53.645 11:23:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
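The "Waiting for process to start up..." message comes from waitforlisten: the dirty variant has just killed the previous nvmf_tgt with kill -9 and restarted the app (pid 3549082), and the harness now polls the new target's RPC socket (note max_retries=100 above) until it answers. A rough sketch of that polling loop, assuming the default /var/tmp/spdk.sock socket path:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        # rpc_get_methods succeeds only once the target is up and serving RPCs
        "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done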
00:14:53.645 11:23:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:53.645 11:23:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:53.645 [2024-06-10 11:23:22.417928] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:14:53.645 [2024-06-10 11:23:22.417989] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.645 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.645 [2024-06-10 11:23:22.478588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.645 [2024-06-10 11:23:22.542633] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.645 [2024-06-10 11:23:22.542668] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.645 [2024-06-10 11:23:22.542676] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.645 [2024-06-10 11:23:22.542682] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.645 [2024-06-10 11:23:22.542687] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.645 [2024-06-10 11:23:22.542705] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.213 11:23:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:54.213 11:23:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:14:54.213 11:23:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:54.213 11:23:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:54.213 11:23:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:54.473 11:23:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.473 11:23:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:54.473 [2024-06-10 11:23:23.351597] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:54.473 [2024-06-10 11:23:23.351684] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:54.473 [2024-06-10 11:23:23.351712] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:54.473 11:23:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:54.473 11:23:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f8278253-241e-4a6c-9269-2945fc93a0e7 00:14:54.473 11:23:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=f8278253-241e-4a6c-9269-2945fc93a0e7 00:14:54.473 11:23:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:54.473 11:23:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:14:54.473 11:23:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:54.473 11:23:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:54.473 11:23:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:54.733 11:23:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f8278253-241e-4a6c-9269-2945fc93a0e7 -t 2000 00:14:54.733 [ 00:14:54.733 { 00:14:54.733 "name": "f8278253-241e-4a6c-9269-2945fc93a0e7", 00:14:54.733 "aliases": [ 00:14:54.733 "lvs/lvol" 00:14:54.733 ], 00:14:54.733 "product_name": "Logical Volume", 00:14:54.733 "block_size": 4096, 00:14:54.733 "num_blocks": 38912, 00:14:54.733 "uuid": "f8278253-241e-4a6c-9269-2945fc93a0e7", 00:14:54.733 "assigned_rate_limits": { 00:14:54.733 "rw_ios_per_sec": 0, 00:14:54.733 "rw_mbytes_per_sec": 0, 00:14:54.733 "r_mbytes_per_sec": 0, 00:14:54.733 "w_mbytes_per_sec": 0 00:14:54.733 }, 00:14:54.733 "claimed": false, 00:14:54.733 "zoned": false, 00:14:54.733 "supported_io_types": { 00:14:54.733 "read": true, 00:14:54.733 "write": true, 00:14:54.733 "unmap": true, 00:14:54.733 "write_zeroes": true, 00:14:54.733 "flush": false, 00:14:54.733 "reset": true, 00:14:54.733 "compare": false, 00:14:54.733 "compare_and_write": false, 00:14:54.733 "abort": false, 00:14:54.733 "nvme_admin": false, 00:14:54.733 "nvme_io": false 00:14:54.733 }, 00:14:54.733 "driver_specific": { 00:14:54.733 "lvol": { 00:14:54.733 "lvol_store_uuid": "6bcfee8f-0020-4b92-a876-3a5e9bbf58fe", 00:14:54.733 "base_bdev": "aio_bdev", 00:14:54.733 "thin_provision": false, 00:14:54.733 "num_allocated_clusters": 38, 00:14:54.733 "snapshot": false, 00:14:54.733 "clone": false, 00:14:54.733 "esnap_clone": false 00:14:54.733 } 00:14:54.733 } 00:14:54.733 } 00:14:54.733 ] 00:14:54.733 11:23:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:14:54.733 11:23:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bcfee8f-0020-4b92-a876-3a5e9bbf58fe 00:14:54.733 11:23:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:54.992 11:23:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:54.992 11:23:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bcfee8f-0020-4b92-a876-3a5e9bbf58fe 00:14:54.992 11:23:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:55.252 11:23:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:55.252 11:23:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:55.252 [2024-06-10 11:23:24.131580] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:55.252 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bcfee8f-0020-4b92-a876-3a5e9bbf58fe 
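The cluster counts asserted just above follow from the test geometry: the aio file starts at 200 MiB with a 4 MiB cluster size (--cluster-sz 4194304), i.e. 50 clusters, of which the blobstore appears to reserve one for metadata, leaving total_data_clusters = 49; after the file is truncated to 400 MiB and the lvstore grown, 100 - 1 = 99. The 150 MiB lvol spans ceil(150 / 4) = 38 clusters (the num_allocated_clusters shown in the bdev dump), so free_clusters = 99 - 38 = 61, which is exactly what the (( free_clusters == 61 )) and (( data_clusters == 99 )) checks verify, both before the aio_bdev was hot-removed and again after it is recreated and the dirty lvstore recovered.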
00:14:55.252 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:14:55.252 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bcfee8f-0020-4b92-a876-3a5e9bbf58fe 00:14:55.252 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:55.252 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:55.252 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:55.252 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:55.252 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:55.252 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:55.252 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:55.252 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:14:55.252 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bcfee8f-0020-4b92-a876-3a5e9bbf58fe 00:14:55.513 request: 00:14:55.513 { 00:14:55.513 "uuid": "6bcfee8f-0020-4b92-a876-3a5e9bbf58fe", 00:14:55.513 "method": "bdev_lvol_get_lvstores", 00:14:55.513 "req_id": 1 00:14:55.513 } 00:14:55.513 Got JSON-RPC error response 00:14:55.513 response: 00:14:55.513 { 00:14:55.513 "code": -19, 00:14:55.513 "message": "No such device" 00:14:55.513 } 00:14:55.513 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:14:55.513 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:55.513 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:55.513 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:55.513 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:55.513 aio_bdev 00:14:55.513 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f8278253-241e-4a6c-9269-2945fc93a0e7 00:14:55.513 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=f8278253-241e-4a6c-9269-2945fc93a0e7 00:14:55.513 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:55.513 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:14:55.513 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:55.513 11:23:24 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:55.513 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:55.774 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f8278253-241e-4a6c-9269-2945fc93a0e7 -t 2000 00:14:55.774 [ 00:14:55.774 { 00:14:55.774 "name": "f8278253-241e-4a6c-9269-2945fc93a0e7", 00:14:55.774 "aliases": [ 00:14:55.774 "lvs/lvol" 00:14:55.774 ], 00:14:55.774 "product_name": "Logical Volume", 00:14:55.774 "block_size": 4096, 00:14:55.774 "num_blocks": 38912, 00:14:55.774 "uuid": "f8278253-241e-4a6c-9269-2945fc93a0e7", 00:14:55.774 "assigned_rate_limits": { 00:14:55.774 "rw_ios_per_sec": 0, 00:14:55.774 "rw_mbytes_per_sec": 0, 00:14:55.774 "r_mbytes_per_sec": 0, 00:14:55.774 "w_mbytes_per_sec": 0 00:14:55.774 }, 00:14:55.774 "claimed": false, 00:14:55.774 "zoned": false, 00:14:55.774 "supported_io_types": { 00:14:55.774 "read": true, 00:14:55.774 "write": true, 00:14:55.774 "unmap": true, 00:14:55.774 "write_zeroes": true, 00:14:55.774 "flush": false, 00:14:55.774 "reset": true, 00:14:55.774 "compare": false, 00:14:55.774 "compare_and_write": false, 00:14:55.774 "abort": false, 00:14:55.774 "nvme_admin": false, 00:14:55.774 "nvme_io": false 00:14:55.774 }, 00:14:55.774 "driver_specific": { 00:14:55.774 "lvol": { 00:14:55.774 "lvol_store_uuid": "6bcfee8f-0020-4b92-a876-3a5e9bbf58fe", 00:14:55.774 "base_bdev": "aio_bdev", 00:14:55.774 "thin_provision": false, 00:14:55.774 "num_allocated_clusters": 38, 00:14:55.774 "snapshot": false, 00:14:55.774 "clone": false, 00:14:55.774 "esnap_clone": false 00:14:55.774 } 00:14:55.774 } 00:14:55.774 } 00:14:55.774 ] 00:14:56.035 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:14:56.035 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bcfee8f-0020-4b92-a876-3a5e9bbf58fe 00:14:56.035 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:56.035 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:56.035 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bcfee8f-0020-4b92-a876-3a5e9bbf58fe 00:14:56.035 11:23:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:56.295 11:23:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:56.295 11:23:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f8278253-241e-4a6c-9269-2945fc93a0e7 00:14:56.295 11:23:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6bcfee8f-0020-4b92-a876-3a5e9bbf58fe 00:14:56.555 11:23:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 
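Teardown then mirrors setup in reverse order: delete the lvol, delete the lvstore, delete the backing aio bdev, and finally remove the file (the rm -f that follows below). Condensed into the bare commands, with the UUIDs from this run:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    "$RPC" bdev_lvol_delete f8278253-241e-4a6c-9269-2945fc93a0e7
    "$RPC" bdev_lvol_delete_lvstore -u 6bcfee8f-0020-4b92-a876-3a5e9bbf58fe
    "$RPC" bdev_aio_delete aio_bdev
    rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev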
00:14:56.816 11:23:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:56.816 00:14:56.816 real 0m16.894s 00:14:56.816 user 0m44.805s 00:14:56.816 sys 0m2.324s 00:14:56.816 11:23:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:56.816 11:23:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:56.816 ************************************ 00:14:56.816 END TEST lvs_grow_dirty 00:14:56.816 ************************************ 00:14:56.816 11:23:25 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:56.816 11:23:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # type=--id 00:14:56.816 11:23:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # id=0 00:14:56.816 11:23:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:14:56.816 11:23:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:56.816 11:23:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:14:56.816 11:23:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:14:56.816 11:23:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # for n in $shm_files 00:14:56.816 11:23:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:56.816 nvmf_trace.0 00:14:56.816 11:23:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # return 0 00:14:56.816 11:23:25 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:56.816 11:23:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:56.816 11:23:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:56.816 11:23:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:56.816 11:23:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:56.816 11:23:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:56.816 11:23:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:56.816 11:23:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:56.816 rmmod nvme_rdma 00:14:56.816 rmmod nvme_fabrics 00:14:56.816 11:23:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:56.817 11:23:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:56.817 11:23:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:56.817 11:23:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3549082 ']' 00:14:56.817 11:23:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3549082 00:14:56.817 11:23:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@949 -- # '[' -z 3549082 ']' 00:14:56.817 11:23:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # kill -0 3549082 00:14:56.817 11:23:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # uname 00:14:56.817 11:23:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:56.817 11:23:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3549082 00:14:57.078 11:23:25 
nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:57.078 11:23:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:57.078 11:23:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3549082' 00:14:57.078 killing process with pid 3549082 00:14:57.078 11:23:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # kill 3549082 00:14:57.078 11:23:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # wait 3549082 00:14:57.078 11:23:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:57.078 11:23:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:57.078 00:14:57.078 real 0m40.886s 00:14:57.078 user 1m5.934s 00:14:57.078 sys 0m9.035s 00:14:57.078 11:23:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:57.078 11:23:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:57.078 ************************************ 00:14:57.078 END TEST nvmf_lvs_grow 00:14:57.078 ************************************ 00:14:57.078 11:23:25 nvmf_rdma -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:14:57.078 11:23:25 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:57.078 11:23:25 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:57.078 11:23:25 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:57.078 ************************************ 00:14:57.078 START TEST nvmf_bdev_io_wait 00:14:57.078 ************************************ 00:14:57.078 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:14:57.339 * Looking for test storage... 
00:14:57.339 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:57.339 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:57.339 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:57.339 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.339 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:57.339 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.339 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.339 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.339 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.339 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.339 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.339 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.339 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.339 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:57.339 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:57.339 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.339 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.339 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:57.339 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:57.339 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:57.339 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:57.340 11:23:26 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:57.340 11:23:26 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:05.487 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:05.487 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:05.488 
11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:15:05.488 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:15:05.488 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:15:05.488 Found net devices under 0000:98:00.0: mlx_0_0 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:15:05.488 Found net devices under 0000:98:00.1: mlx_0_1 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # rdma_device_init 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # uname 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:05.488 11:23:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:15:05.488 11:23:33 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:05.488 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:05.488 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:15:05.488 altname enp152s0f0np0 00:15:05.488 altname ens817f0np0 00:15:05.488 inet 192.168.100.8/24 scope global mlx_0_0 00:15:05.488 valid_lft forever preferred_lft forever 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:05.488 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:05.489 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:05.489 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:15:05.489 altname enp152s0f1np1 00:15:05.489 altname ens817f1np1 00:15:05.489 inet 192.168.100.9/24 scope global mlx_0_1 00:15:05.489 valid_lft forever preferred_lft forever 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:05.489 11:23:33 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:05.489 192.168.100.9' 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:05.489 192.168.100.9' 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # head -n 1 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait 
-- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:05.489 192.168.100.9' 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # tail -n +2 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # head -n 1 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3553497 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3553497 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # '[' -z 3553497 ']' 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:05.489 [2024-06-10 11:23:33.217615] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:15:05.489 [2024-06-10 11:23:33.217666] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.489 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.489 [2024-06-10 11:23:33.278256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:05.489 [2024-06-10 11:23:33.344544] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:05.489 [2024-06-10 11:23:33.344582] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
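Note: at this point the harness has resolved the two mlx5 ports to mlx_0_0 (192.168.100.8) and mlx_0_1 (192.168.100.9), loaded nvme-rdma on the host, and launched the NVMe-oF target with --wait-for-rpc so that bdev options can still be changed before the framework finishes initializing. A minimal sketch of the same bring-up done by hand (binary path, core mask and flags are copied from the trace above; treat them as assumptions on any other machine):

  # sketch only - values taken from this trace, adjust for your own checkout
  modprobe nvme-rdma
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &     # 4 reactors, tracing on, hold before framework init
  # wait until the app listens on /var/tmp/spdk.sock before sending any RPCs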
00:15:05.489 [2024-06-10 11:23:33.344590] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:05.489 [2024-06-10 11:23:33.344596] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:05.489 [2024-06-10 11:23:33.344602] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:05.489 [2024-06-10 11:23:33.344741] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.489 [2024-06-10 11:23:33.344853] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:05.489 [2024-06-10 11:23:33.345176] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:05.489 [2024-06-10 11:23:33.345176] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # return 0 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:05.489 11:23:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:05.489 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.489 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:05.489 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:05.489 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:05.489 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:05.489 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:05.489 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:05.489 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:05.489 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:05.489 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:05.489 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:05.489 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:05.489 [2024-06-10 11:23:34.127722] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10d8000/0x10dc4f0) succeed. 00:15:05.489 [2024-06-10 11:23:34.140565] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10d9640/0x111db80) succeed. 
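Note: bdev_set_options -p 5 -c 1 above shrinks the global bdev_io pool to 5 entries with a per-thread cache of 1, which is what lets the queue depth of 128 used later exhaust the pool and exercise the bdev I/O-wait (ENOMEM retry) path this test is named after; it has to be issued before framework_start_init, which is why the target was started with --wait-for-rpc. The same configuration expressed as plain RPC calls (a sketch, assuming the default /var/tmp/spdk.sock socket and an SPDK checkout as the working directory):

  # sketch - same sequence the rpc_cmd calls above perform
  scripts/rpc.py bdev_set_options -p 5 -c 1        # tiny bdev_io pool / per-thread cache
  scripts/rpc.py framework_start_init              # finish startup now that options are pinned
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192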
00:15:05.489 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:05.489 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:05.489 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:05.489 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:05.489 Malloc0 00:15:05.489 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:05.489 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:05.489 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:05.489 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:05.489 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:05.489 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:05.489 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:05.489 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:05.490 [2024-06-10 11:23:34.335657] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3553840 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3553843 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:05.490 { 00:15:05.490 "params": { 00:15:05.490 "name": "Nvme$subsystem", 00:15:05.490 "trtype": "$TEST_TRANSPORT", 00:15:05.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:05.490 "adrfam": "ipv4", 00:15:05.490 "trsvcid": "$NVMF_PORT", 00:15:05.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:05.490 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:05.490 "hdgst": ${hdgst:-false}, 00:15:05.490 "ddgst": ${ddgst:-false} 00:15:05.490 }, 00:15:05.490 "method": "bdev_nvme_attach_controller" 00:15:05.490 } 00:15:05.490 EOF 00:15:05.490 
)") 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3553846 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:05.490 { 00:15:05.490 "params": { 00:15:05.490 "name": "Nvme$subsystem", 00:15:05.490 "trtype": "$TEST_TRANSPORT", 00:15:05.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:05.490 "adrfam": "ipv4", 00:15:05.490 "trsvcid": "$NVMF_PORT", 00:15:05.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:05.490 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:05.490 "hdgst": ${hdgst:-false}, 00:15:05.490 "ddgst": ${ddgst:-false} 00:15:05.490 }, 00:15:05.490 "method": "bdev_nvme_attach_controller" 00:15:05.490 } 00:15:05.490 EOF 00:15:05.490 )") 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3553849 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:05.490 { 00:15:05.490 "params": { 00:15:05.490 "name": "Nvme$subsystem", 00:15:05.490 "trtype": "$TEST_TRANSPORT", 00:15:05.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:05.490 "adrfam": "ipv4", 00:15:05.490 "trsvcid": "$NVMF_PORT", 00:15:05.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:05.490 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:05.490 "hdgst": ${hdgst:-false}, 00:15:05.490 "ddgst": ${ddgst:-false} 00:15:05.490 }, 00:15:05.490 "method": "bdev_nvme_attach_controller" 00:15:05.490 } 00:15:05.490 EOF 00:15:05.490 )") 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:05.490 11:23:34 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:05.490 { 00:15:05.490 "params": { 00:15:05.490 "name": "Nvme$subsystem", 00:15:05.490 "trtype": "$TEST_TRANSPORT", 00:15:05.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:05.490 "adrfam": "ipv4", 00:15:05.490 "trsvcid": "$NVMF_PORT", 00:15:05.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:05.490 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:05.490 "hdgst": ${hdgst:-false}, 00:15:05.490 "ddgst": ${ddgst:-false} 00:15:05.490 }, 00:15:05.490 "method": "bdev_nvme_attach_controller" 00:15:05.490 } 00:15:05.490 EOF 00:15:05.490 )") 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3553840 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:05.490 "params": { 00:15:05.490 "name": "Nvme1", 00:15:05.490 "trtype": "rdma", 00:15:05.490 "traddr": "192.168.100.8", 00:15:05.490 "adrfam": "ipv4", 00:15:05.490 "trsvcid": "4420", 00:15:05.490 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.490 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:05.490 "hdgst": false, 00:15:05.490 "ddgst": false 00:15:05.490 }, 00:15:05.490 "method": "bdev_nvme_attach_controller" 00:15:05.490 }' 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
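Note: each of the four bdevperf instances (write/read/flush/unmap, one dedicated core and shared-memory id apiece) is fed a generated JSON config over /dev/fd/63; after the jq/printf step it reduces to the single bdev_nvme_attach_controller parameter block printed just above and repeated below for the other instances. As a rough equivalent outside bdevperf, the same attach could be issued as a one-off RPC (a sketch; the short flags are the stock rpc.py spelling, not something shown in this trace):

  # sketch - expose the remote namespace as a local bdev named Nvme1n1
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t rdma -a 192.168.100.8 \
      -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1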
00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:05.490 "params": { 00:15:05.490 "name": "Nvme1", 00:15:05.490 "trtype": "rdma", 00:15:05.490 "traddr": "192.168.100.8", 00:15:05.490 "adrfam": "ipv4", 00:15:05.490 "trsvcid": "4420", 00:15:05.490 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.490 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:05.490 "hdgst": false, 00:15:05.490 "ddgst": false 00:15:05.490 }, 00:15:05.490 "method": "bdev_nvme_attach_controller" 00:15:05.490 }' 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:05.490 "params": { 00:15:05.490 "name": "Nvme1", 00:15:05.490 "trtype": "rdma", 00:15:05.490 "traddr": "192.168.100.8", 00:15:05.490 "adrfam": "ipv4", 00:15:05.490 "trsvcid": "4420", 00:15:05.490 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.490 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:05.490 "hdgst": false, 00:15:05.490 "ddgst": false 00:15:05.490 }, 00:15:05.490 "method": "bdev_nvme_attach_controller" 00:15:05.490 }' 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:05.490 11:23:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:05.490 "params": { 00:15:05.490 "name": "Nvme1", 00:15:05.490 "trtype": "rdma", 00:15:05.490 "traddr": "192.168.100.8", 00:15:05.490 "adrfam": "ipv4", 00:15:05.490 "trsvcid": "4420", 00:15:05.490 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.490 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:05.490 "hdgst": false, 00:15:05.490 "ddgst": false 00:15:05.490 }, 00:15:05.490 "method": "bdev_nvme_attach_controller" 00:15:05.490 }' 00:15:05.490 [2024-06-10 11:23:34.386027] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:15:05.490 [2024-06-10 11:23:34.386083] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:05.490 [2024-06-10 11:23:34.388921] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:15:05.490 [2024-06-10 11:23:34.388966] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:05.490 [2024-06-10 11:23:34.389335] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:15:05.490 [2024-06-10 11:23:34.389380] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:05.490 [2024-06-10 11:23:34.389758] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:15:05.491 [2024-06-10 11:23:34.389808] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:05.491 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.792 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.792 [2024-06-10 11:23:34.530625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.792 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.792 [2024-06-10 11:23:34.581585] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:15:05.792 [2024-06-10 11:23:34.592928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.792 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.792 [2024-06-10 11:23:34.642798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.792 [2024-06-10 11:23:34.644012] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:15:05.792 [2024-06-10 11:23:34.689826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.792 [2024-06-10 11:23:34.693090] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:15:05.792 [2024-06-10 11:23:34.740596] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:15:06.053 Running I/O for 1 seconds... 00:15:06.053 Running I/O for 1 seconds... 00:15:06.053 Running I/O for 1 seconds... 00:15:06.053 Running I/O for 1 seconds... 00:15:06.995 00:15:06.995 Latency(us) 00:15:06.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.995 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:06.995 Nvme1n1 : 1.00 18917.45 73.90 0.00 0.00 6745.56 4724.05 16384.00 00:15:06.995 =================================================================================================================== 00:15:06.995 Total : 18917.45 73.90 0.00 0.00 6745.56 4724.05 16384.00 00:15:06.995 00:15:06.995 Latency(us) 00:15:06.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.995 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:06.995 Nvme1n1 : 1.00 189218.31 739.13 0.00 0.00 673.73 261.12 2553.17 00:15:06.995 =================================================================================================================== 00:15:06.995 Total : 189218.31 739.13 0.00 0.00 673.73 261.12 2553.17 00:15:06.995 00:15:06.995 Latency(us) 00:15:06.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.995 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:06.995 Nvme1n1 : 1.00 28646.90 111.90 0.00 0.00 4457.71 2839.89 14964.05 00:15:06.995 =================================================================================================================== 00:15:06.995 Total : 28646.90 111.90 0.00 0.00 4457.71 2839.89 14964.05 00:15:06.995 00:15:06.995 Latency(us) 00:15:06.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.995 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:06.995 Nvme1n1 : 1.00 16726.46 65.34 0.00 0.00 7630.63 4287.15 19005.44 00:15:06.995 =================================================================================================================== 00:15:06.995 Total : 16726.46 65.34 0.00 0.00 7630.63 4287.15 19005.44 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # 
wait 3553843 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3553846 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3553849 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:07.257 rmmod nvme_rdma 00:15:07.257 rmmod nvme_fabrics 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3553497 ']' 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3553497 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@949 -- # '[' -z 3553497 ']' 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # kill -0 3553497 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # uname 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:07.257 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3553497 00:15:07.518 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:07.518 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:07.518 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3553497' 00:15:07.518 killing process with pid 3553497 00:15:07.518 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # kill 3553497 00:15:07.518 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # wait 3553497 00:15:07.518 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:07.518 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:07.518 00:15:07.518 real 0m10.464s 00:15:07.518 user 0m19.862s 00:15:07.519 sys 0m6.304s 00:15:07.519 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
xtrace_disable 00:15:07.519 11:23:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:07.519 ************************************ 00:15:07.519 END TEST nvmf_bdev_io_wait 00:15:07.519 ************************************ 00:15:07.779 11:23:36 nvmf_rdma -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:15:07.779 11:23:36 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:07.779 11:23:36 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:07.779 11:23:36 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:07.779 ************************************ 00:15:07.779 START TEST nvmf_queue_depth 00:15:07.779 ************************************ 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:15:07.779 * Looking for test storage... 00:15:07.779 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- 
target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:07.779 11:23:36 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:15:14.366 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:15:14.366 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.366 11:23:42 
nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:15:14.366 Found net devices under 0000:98:00.0: mlx_0_0 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:15:14.366 Found net devices under 0000:98:00.1: mlx_0_1 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@420 -- # rdma_device_init 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # uname 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:14.366 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:14.367 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:14.367 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:15:14.367 altname enp152s0f0np0 00:15:14.367 altname ens817f0np0 00:15:14.367 inet 192.168.100.8/24 scope global mlx_0_0 00:15:14.367 valid_lft forever preferred_lft forever 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:14.367 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 
00:15:14.367 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:15:14.367 altname enp152s0f1np1 00:15:14.367 altname ens817f1np1 00:15:14.367 inet 192.168.100.9/24 scope global mlx_0_1 00:15:14.367 valid_lft forever preferred_lft forever 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:14.367 11:23:42 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:14.367 192.168.100.9' 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:14.367 192.168.100.9' 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # head -n 1 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:14.367 192.168.100.9' 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # tail -n +2 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # head -n 1 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3557852 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3557852 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 3557852 ']' 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:14.367 11:23:43 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:14.367 [2024-06-10 11:23:43.111681] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:15:14.367 [2024-06-10 11:23:43.111749] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.367 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.367 [2024-06-10 11:23:43.194968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.367 [2024-06-10 11:23:43.284171] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.367 [2024-06-10 11:23:43.284230] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:14.367 [2024-06-10 11:23:43.284238] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:14.367 [2024-06-10 11:23:43.284245] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:14.367 [2024-06-10 11:23:43.284251] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:14.367 [2024-06-10 11:23:43.284282] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.940 11:23:43 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:14.940 11:23:43 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:15:14.940 11:23:43 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:14.940 11:23:43 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:14.940 11:23:43 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:15.200 11:23:43 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.200 11:23:43 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:15.200 11:23:43 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:15.200 11:23:43 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:15.200 [2024-06-10 11:23:43.979192] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x25041d0/0x25086c0) succeed. 00:15:15.200 [2024-06-10 11:23:43.992903] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x25056d0/0x2549d50) succeed. 
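The target bring-up just traced boils down to two steps: start nvmf_tgt pinned to core 1, then create the RDMA transport over the default RPC socket. A minimal standalone sketch of the same sequence, with binary paths and arguments copied from the trace (the readiness wait is simplified; the suite's waitforlisten helper polls /var/tmp/spdk.sock for this):

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # start the target: -i 0 shared-memory/instance id (NVMF_APP_SHM_ID),
  # -e 0xFFFF all tracepoint groups, -m 0x2 core mask (core 1 only)
  $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # (the suite blocks here with waitforlisten until /var/tmp/spdk.sock answers)
  $spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192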
00:15:15.200 11:23:44 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:15.200 11:23:44 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:15.200 11:23:44 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:15.200 11:23:44 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:15.200 Malloc0 00:15:15.200 11:23:44 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:15.200 11:23:44 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:15.200 11:23:44 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:15.200 11:23:44 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:15.200 11:23:44 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:15.200 11:23:44 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:15.200 11:23:44 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:15.200 11:23:44 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:15.200 11:23:44 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:15.200 11:23:44 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:15.200 11:23:44 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:15.200 11:23:44 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:15.200 [2024-06-10 11:23:44.085520] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:15.200 11:23:44 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:15.200 11:23:44 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3557929 00:15:15.200 11:23:44 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:15.200 11:23:44 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3557929 /var/tmp/bdevperf.sock 00:15:15.200 11:23:44 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 3557929 ']' 00:15:15.200 11:23:44 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:15.200 11:23:44 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:15.200 11:23:44 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:15.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
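Spelled out with rpc.py, the provisioning sequence traced above is four calls against the running target (a sketch; rpc_cmd in the suite is a wrapper that sends the same commands to /var/tmp/spdk.sock):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose the bdev as a namespace
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420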
00:15:15.201 11:23:44 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:15.201 11:23:44 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:15.201 11:23:44 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:15.201 [2024-06-10 11:23:44.136743] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:15:15.201 [2024-06-10 11:23:44.136808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3557929 ] 00:15:15.201 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.460 [2024-06-10 11:23:44.202261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.460 [2024-06-10 11:23:44.276770] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.029 11:23:44 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:16.029 11:23:44 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:15:16.029 11:23:44 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:16.029 11:23:44 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:16.029 11:23:44 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:16.289 NVMe0n1 00:15:16.289 11:23:45 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:16.289 11:23:45 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:16.289 Running I/O for 10 seconds... 
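The initiator half runs the same pattern against a second RPC socket: bdevperf starts idle (-z) with the queue depth under test (-q 1024), an NVMe bdev is attached over RDMA, and perform_tests kicks off the 10-second verify run whose results follow. A sketch, arguments copied from the trace above:

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests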
00:15:26.282
00:15:26.282 Latency(us)
00:15:26.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:26.282 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:15:26.282 Verification LBA range: start 0x0 length 0x4000
00:15:26.282 NVMe0n1 : 10.04 15706.73 61.35 0.00 0.00 65019.52 21408.43 46093.65
00:15:26.282 ===================================================================================================================
00:15:26.282 Total : 15706.73 61.35 0.00 0.00 65019.52 21408.43 46093.65
00:15:26.282 0
00:15:26.282 11:23:55 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3557929
00:15:26.282 11:23:55 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 3557929 ']'
00:15:26.282 11:23:55 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 3557929
00:15:26.282 11:23:55 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname
00:15:26.282 11:23:55 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:15:26.282 11:23:55 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3557929
00:15:26.282 11:23:55 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:15:26.282 11:23:55 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:15:26.282 11:23:55 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3557929'
00:15:26.282 killing process with pid 3557929
00:15:26.282 11:23:55 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 3557929
00:15:26.282 Received shutdown signal, test time was about 10.000000 seconds
00:15:26.282
00:15:26.282 Latency(us)
00:15:26.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:26.282 ===================================================================================================================
00:15:26.282 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:15:26.282 11:23:55 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 3557929
00:15:26.542 11:23:55 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:15:26.542 11:23:55 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:15:26.542 11:23:55 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:15:26.542 11:23:55 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:15:26.542 11:23:55 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:15:26.542 11:23:55 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:15:26.542 11:23:55 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:15:26.542 11:23:55 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:26.542 11:23:55 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:15:26.542 rmmod nvme_rdma
00:15:26.542 rmmod nvme_fabrics
00:15:26.542 11:23:55 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:15:26.542 11:23:55 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:15:26.542 11:23:55 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0
00:15:26.542 11:23:55 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3557852 ']'
00:15:26.542 11:23:55 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3557852
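The killprocess helper traced above for the bdevperf pid (and continuing below for the target pid) can be reconstructed roughly as follows. Everything matches the xtrace except the sudo branch, whose body is not exercised here and is an assumption:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1           # @949: nothing to do without a pid
      kill -0 "$pid"                      # @953: errors out (under set -e) if the pid is already gone
      if [ "$(uname)" = Linux ]; then     # @954
          process_name=$(ps --no-headers -o comm= "$pid")   # @955
      fi
      if [ "$process_name" = sudo ]; then # @959
          :  # untraced path; presumably the sudo wrapper's child is targeted instead (assumption)
      fi
      echo "killing process with pid $pid"   # @967
      kill "$pid"                            # @968
      wait "$pid"                            # @973
  }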
00:15:26.542 11:23:55 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 3557852 ']' 00:15:26.542 11:23:55 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 3557852 00:15:26.542 11:23:55 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:15:26.542 11:23:55 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:26.542 11:23:55 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3557852 00:15:26.542 11:23:55 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:15:26.542 11:23:55 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:15:26.542 11:23:55 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3557852' 00:15:26.542 killing process with pid 3557852 00:15:26.542 11:23:55 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 3557852 00:15:26.542 11:23:55 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 3557852 00:15:26.802 11:23:55 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:26.802 11:23:55 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:26.802 00:15:26.802 real 0m19.076s 00:15:26.802 user 0m25.771s 00:15:26.802 sys 0m5.390s 00:15:26.802 11:23:55 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:26.802 11:23:55 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:26.802 ************************************ 00:15:26.802 END TEST nvmf_queue_depth 00:15:26.802 ************************************ 00:15:26.802 11:23:55 nvmf_rdma -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:15:26.802 11:23:55 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:26.802 11:23:55 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:26.802 11:23:55 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:26.802 ************************************ 00:15:26.802 START TEST nvmf_target_multipath 00:15:26.802 ************************************ 00:15:26.802 11:23:55 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:15:27.063 * Looking for test storage... 
00:15:27.063 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
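One detail from the common.sh setup above worth calling out: the host identity is minted fresh each run with nvme-cli, and the host ID is simply the UUID tail of that NQN. A sketch of the derivation (the trace shows the inputs and results; the exact parameter expansion used by common.sh is an assumption):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep everything after the last ':' -- the bare UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")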
00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:27.063 11:23:55 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.215 11:24:02 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:15:35.215 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.215 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:15:35.216 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 
== 0 )) 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:15:35.216 Found net devices under 0000:98:00.0: mlx_0_0 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:15:35.216 Found net devices under 0000:98:00.1: mlx_0_1 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@420 -- # rdma_device_init 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # uname 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:35.216 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:35.216 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:15:35.216 altname enp152s0f0np0 00:15:35.216 altname ens817f0np0 00:15:35.216 inet 192.168.100.8/24 scope global mlx_0_0 00:15:35.216 valid_lft forever preferred_lft forever 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:35.216 11:24:02 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:35.216 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:35.216 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:15:35.216 altname enp152s0f1np1 00:15:35.216 altname ens817f1np1 00:15:35.216 inet 192.168.100.9/24 scope global mlx_0_1 00:15:35.216 valid_lft forever preferred_lft forever 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:35.216 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:35.217 192.168.100.9' 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:35.217 192.168.100.9' 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # head -n 1 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:35.217 192.168.100.9' 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # tail -n +2 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # head -n 1 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:15:35.217 run this test only with TCP transport for now 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:35.217 11:24:02 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:35.217 rmmod nvme_rdma 00:15:35.217 rmmod nvme_fabrics 00:15:35.217 11:24:03 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:35.217 11:24:03 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:35.217 11:24:03 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:35.217 
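The early bail-out above comes from a guard at the top of multipath.sh (@51 through @54 in the trace, whose teardown continues below). Reconstructed as a sketch, with the variable name TEST_TRANSPORT assumed from the suite's conventions:

  # target/multipath.sh -- the test body is TCP-only for now
  if [ "$TEST_TRANSPORT" != tcp ]; then
      echo 'run this test only with TCP transport for now'
      nvmftestfini   # tear down everything nvmftestinit brought up
      exit 0
  fi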
11:24:03 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:35.217 11:24:03 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:35.217 11:24:03 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:35.217 11:24:03 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:15:35.217 11:24:03 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:35.217 11:24:03 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:35.217 11:24:03 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:35.217 11:24:03 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:35.217 11:24:03 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:35.217 11:24:03 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:35.217 11:24:03 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:35.217 11:24:03 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:35.217 11:24:03 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:35.217 11:24:03 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:35.217 11:24:03 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:35.217 11:24:03 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:35.217 11:24:03 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:35.217 11:24:03 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:35.217 00:15:35.217 real 0m7.343s 00:15:35.217 user 0m2.098s 00:15:35.217 sys 0m5.341s 00:15:35.217 11:24:03 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:35.217 11:24:03 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:35.217 ************************************ 00:15:35.217 END TEST nvmf_target_multipath 00:15:35.217 ************************************ 00:15:35.217 11:24:03 nvmf_rdma -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:15:35.217 11:24:03 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:35.217 11:24:03 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:35.217 11:24:03 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:35.217 ************************************ 00:15:35.217 START TEST nvmf_zcopy 00:15:35.217 ************************************ 00:15:35.217 11:24:03 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:15:35.217 * Looking for test storage... 
00:15:35.217 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:35.217 11:24:03 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:35.217 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:35.217 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:35.217 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.217 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.217 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.217 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.217 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.217 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.217 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.217 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.217 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.217 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # 
[[ phy != virt ]] 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:15:35.218 11:24:03 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci 
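The block above classifies NICs by PCI vendor:device pair before any interface is touched. In outline (pci_bus_cache is populated elsewhere in the harness from lspci-style data; the BDF values below are the ones this log reports, and SPDK_TEST_NVMF_NICS=mlx5 narrows the candidate list to the Mellanox entries):

    intel=0x8086 mellanox=0x15b3
    declare -A pci_bus_cache                        # vendor:device -> BDFs
    pci_bus_cache["$mellanox:0x1015"]="0000:98:00.0 0000:98:00.1"  # per log
    e810=() x722=() mlx=()
    mlx+=(${pci_bus_cache["$mellanox:0x1015"]})     # ConnectX-4 Lx
    pci_devs=("${e810[@]}" "${x722[@]}" "${mlx[@]}")
    [[ $SPDK_TEST_NVMF_NICS == mlx5 ]] && pci_devs=("${mlx[@]}")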
in "${pci_devs[@]}" 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:15:41.803 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:15:41.803 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:15:41.803 Found net devices under 0000:98:00.0: mlx_0_0 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:15:41.803 Found net devices under 0000:98:00.1: mlx_0_1 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:15:41.803 11:24:10 
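Continuing that sketch: mapping each PCI function to its Linux netdev is a single sysfs glob, which is exactly where the two "Found net devices under ..." lines come from:

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # .../net/mlx_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # basename only
        net_devs+=("${pci_net_devs[@]}")
    done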
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@420 -- # rdma_device_init 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # uname 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:41.803 11:24:10 
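rdma_device_init amounts to loading the RDMA core stack before any IPs are assigned; the seven modprobe calls above, collapsed into a loop (modprobe resolves inter-module dependencies itself, so the order is not load-bearing):

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done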
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:41.803 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:41.803 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:15:41.803 altname enp152s0f0np0 00:15:41.803 altname ens817f0np0 00:15:41.803 inet 192.168.100.8/24 scope global mlx_0_0 00:15:41.803 valid_lft forever preferred_lft forever 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:41.803 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:41.804 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:41.804 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:15:41.804 altname enp152s0f1np1 00:15:41.804 altname ens817f1np1 00:15:41.804 inet 192.168.100.9/24 scope global mlx_0_1 00:15:41.804 valid_lft forever preferred_lft forever 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:41.804 192.168.100.9' 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:41.804 192.168.100.9' 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # head -n 1 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:41.804 192.168.100.9' 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # tail -n +2 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # head -n 1 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3567927 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3567927 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@830 -- # '[' -z 3567927 ']' 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:41.804 11:24:10 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:41.804 [2024-06-10 11:24:10.332880] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:15:41.804 [2024-06-10 11:24:10.332953] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.804 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.804 [2024-06-10 11:24:10.415277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.804 [2024-06-10 11:24:10.510114] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:41.804 [2024-06-10 11:24:10.510169] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:41.804 [2024-06-10 11:24:10.510178] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:41.804 [2024-06-10 11:24:10.510185] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:41.804 [2024-06-10 11:24:10.510191] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
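nvmfappstart backgrounds the target and blocks until its RPC socket answers. The launch line is verbatim from the trace; the polling loop is an assumption about what waitforlisten does internally:

    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1; do
        sleep 0.5               # hypothetical poll interval
    done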
00:15:41.804 [2024-06-10 11:24:10.510225] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.417 11:24:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:42.417 11:24:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@863 -- # return 0 00:15:42.417 11:24:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:42.417 11:24:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:42.417 11:24:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:42.417 11:24:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:42.417 11:24:11 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:15:42.417 11:24:11 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:15:42.417 Unsupported transport: rdma 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@807 -- # type=--id 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@808 -- # id=0 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@819 -- # for n in $shm_files 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:42.418 nvmf_trace.0 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@822 -- # return 0 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:42.418 rmmod nvme_rdma 00:15:42.418 rmmod nvme_fabrics 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3567927 ']' 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3567927 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@949 -- # '[' -z 3567927 ']' 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@953 -- # kill -0 3567927 00:15:42.418 11:24:11 
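process_shm --id 0, traced above, archives the SPDK trace ring out of /dev/shm so it can be decoded offline with spdk_trace. The find and tar commands are the ones in the trace; wrapping them in a loop is inferred from the "for n in $shm_files" line:

    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')
    for n in $shm_files; do                     # here: nvmf_trace.0
        tar -C /dev/shm/ -cvzf "$rootdir/../output/${n}_shm.tar.gz" "$n"
    done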
nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@954 -- # uname 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3567927 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3567927' 00:15:42.418 killing process with pid 3567927 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@968 -- # kill 3567927 00:15:42.418 11:24:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@973 -- # wait 3567927 00:15:42.678 11:24:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:42.678 11:24:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:42.678 00:15:42.678 real 0m8.382s 00:15:42.678 user 0m3.447s 00:15:42.678 sys 0m5.559s 00:15:42.678 11:24:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:42.678 11:24:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:42.678 ************************************ 00:15:42.678 END TEST nvmf_zcopy 00:15:42.678 ************************************ 00:15:42.678 11:24:11 nvmf_rdma -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:15:42.678 11:24:11 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:42.678 11:24:11 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:42.678 11:24:11 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:42.678 ************************************ 00:15:42.678 START TEST nvmf_nmic 00:15:42.678 ************************************ 00:15:42.678 11:24:11 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:15:42.939 * Looking for test storage... 
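killprocess, pieced together from the autotest_common.sh calls above: confirm the pid is still alive, refuse to signal anything running under sudo, then terminate and reap it. The function boundary is inferred from the line numbers in the trace:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                    # still alive?
        [[ $(ps --no-headers -o comm= "$pid") != sudo ]]  # e.g. reactor_1
        kill "$pid"
        wait "$pid"                                       # reap the target
    }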
00:15:42.939 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.939 
11:24:11 nvmf_rdma.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:15:42.939 11:24:11 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:15:51.126 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:51.126 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:51.126 11:24:18 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:15:51.127 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:15:51.127 Found net devices under 0000:98:00.0: mlx_0_0 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:15:51.127 Found net devices under 0000:98:00.1: mlx_0_1 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@420 -- # rdma_device_init 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:51.127 11:24:18 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # uname 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:51.127 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:15:51.127 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:15:51.127 altname enp152s0f0np0 00:15:51.127 altname ens817f0np0 00:15:51.127 inet 192.168.100.8/24 scope global mlx_0_0 00:15:51.127 valid_lft forever preferred_lft forever 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:51.127 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:51.127 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:15:51.127 altname enp152s0f1np1 00:15:51.127 altname ens817f1np1 00:15:51.127 inet 192.168.100.9/24 scope global mlx_0_1 00:15:51.127 valid_lft forever preferred_lft forever 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- 
nvmf/common.sh@105 -- # continue 2 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:51.127 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:51.128 192.168.100.9' 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:51.128 192.168.100.9' 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # head -n 1 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:51.128 192.168.100.9' 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # tail -n +2 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # head -n 1 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3572249 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3572249 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@830 -- # '[' -z 3572249 ']' 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:51.128 11:24:18 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:51.128 [2024-06-10 11:24:18.862345] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:15:51.128 [2024-06-10 11:24:18.862411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.128 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.128 [2024-06-10 11:24:18.927268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:51.128 [2024-06-10 11:24:19.002834] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.128 [2024-06-10 11:24:19.002872] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:51.128 [2024-06-10 11:24:19.002879] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.128 [2024-06-10 11:24:19.002886] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.128 [2024-06-10 11:24:19.002891] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:51.128 [2024-06-10 11:24:19.002955] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.128 [2024-06-10 11:24:19.003070] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.128 [2024-06-10 11:24:19.003226] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.128 [2024-06-10 11:24:19.003228] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@863 -- # return 0 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:51.128 [2024-06-10 11:24:19.727542] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe320b0/0xe365a0) succeed. 00:15:51.128 [2024-06-10 11:24:19.742064] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe336f0/0xe77c30) succeed. 
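For readers retracing the address-discovery steps above: each RDMA netdev's IPv4 address is pulled out with the ip/awk/cut pipeline visible in the trace (the real helper in test/nvmf/common.sh is get_ip_address). A minimal standalone sketch of that pattern, with an illustrative function name rather than the script's own, would be:

# Print the first IPv4 address configured on an interface, mirroring the
# "ip -o -4 addr show <if> | awk '{print $4}' | cut -d/ -f1" steps traced above.
get_if_ipv4() {
  local interface=$1
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_if_ipv4 mlx_0_0   # -> 192.168.100.8 in this run
get_if_ipv4 mlx_0_1   # -> 192.168.100.9 in this run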
00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:51.128 Malloc0 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:51.128 [2024-06-10 11:24:19.917863] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:51.128 test case1: single bdev can't be used in multiple subsystems 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:51.128 [2024-06-10 11:24:19.953640] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:51.128 [2024-06-10 
11:24:19.953658] subsystem.c:2066:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:51.128 [2024-06-10 11:24:19.953665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.128 request: 00:15:51.128 { 00:15:51.128 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:51.128 "namespace": { 00:15:51.128 "bdev_name": "Malloc0", 00:15:51.128 "no_auto_visible": false 00:15:51.128 }, 00:15:51.128 "method": "nvmf_subsystem_add_ns", 00:15:51.128 "req_id": 1 00:15:51.128 } 00:15:51.128 Got JSON-RPC error response 00:15:51.128 response: 00:15:51.128 { 00:15:51.128 "code": -32602, 00:15:51.128 "message": "Invalid parameters" 00:15:51.128 } 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:51.128 Adding namespace failed - expected result. 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:51.128 test case2: host connect to nvmf target in multiple paths 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:51.128 [2024-06-10 11:24:19.965712] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.128 11:24:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:52.512 11:24:21 nvmf_rdma.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:15:54.420 11:24:22 nvmf_rdma.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:54.420 11:24:22 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1197 -- # local i=0 00:15:54.420 11:24:22 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:15:54.420 11:24:22 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:15:54.420 11:24:22 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1204 -- # sleep 2 00:15:56.329 11:24:24 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:56.329 11:24:24 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:56.330 11:24:24 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:56.330 11:24:24 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:15:56.330 11:24:24 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:56.330 11:24:24 
nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # return 0 00:15:56.330 11:24:24 nvmf_rdma.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:56.330 [global] 00:15:56.330 thread=1 00:15:56.330 invalidate=1 00:15:56.330 rw=write 00:15:56.330 time_based=1 00:15:56.330 runtime=1 00:15:56.330 ioengine=libaio 00:15:56.330 direct=1 00:15:56.330 bs=4096 00:15:56.330 iodepth=1 00:15:56.330 norandommap=0 00:15:56.330 numjobs=1 00:15:56.330 00:15:56.330 verify_dump=1 00:15:56.330 verify_backlog=512 00:15:56.330 verify_state_save=0 00:15:56.330 do_verify=1 00:15:56.330 verify=crc32c-intel 00:15:56.330 [job0] 00:15:56.330 filename=/dev/nvme0n1 00:15:56.330 Could not set queue depth (nvme0n1) 00:15:56.589 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:56.589 fio-3.35 00:15:56.589 Starting 1 thread 00:15:57.528 00:15:57.528 job0: (groupid=0, jobs=1): err= 0: pid=3573791: Mon Jun 10 11:24:26 2024 00:15:57.528 read: IOPS=7745, BW=30.3MiB/s (31.7MB/s)(30.3MiB/1001msec) 00:15:57.528 slat (nsec): min=5542, max=28988, avg=5996.98, stdev=1041.27 00:15:57.528 clat (usec): min=41, max=266, avg=53.42, stdev= 7.35 00:15:57.528 lat (usec): min=50, max=279, avg=59.42, stdev= 8.03 00:15:57.528 clat percentiles (usec): 00:15:57.528 | 1.00th=[ 47], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 50], 00:15:57.528 | 30.00th=[ 51], 40.00th=[ 52], 50.00th=[ 53], 60.00th=[ 54], 00:15:57.528 | 70.00th=[ 56], 80.00th=[ 57], 90.00th=[ 59], 95.00th=[ 60], 00:15:57.528 | 99.00th=[ 64], 99.50th=[ 68], 99.90th=[ 194], 99.95th=[ 196], 00:15:57.528 | 99.99th=[ 269] 00:15:57.528 write: IOPS=8183, BW=32.0MiB/s (33.5MB/s)(32.0MiB/1001msec); 0 zone resets 00:15:57.528 slat (nsec): min=7718, max=41457, avg=8577.26, stdev=2252.48 00:15:57.528 clat (usec): min=30, max=319, avg=53.14, stdev=16.57 00:15:57.528 lat (usec): min=51, max=328, avg=61.72, stdev=18.14 00:15:57.528 clat percentiles (usec): 00:15:57.528 | 1.00th=[ 45], 5.00th=[ 47], 10.00th=[ 47], 20.00th=[ 49], 00:15:57.528 | 30.00th=[ 49], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 52], 00:15:57.528 | 70.00th=[ 54], 80.00th=[ 56], 90.00th=[ 58], 95.00th=[ 59], 00:15:57.528 | 99.00th=[ 122], 99.50th=[ 202], 99.90th=[ 258], 99.95th=[ 293], 00:15:57.528 | 99.99th=[ 322] 00:15:57.528 bw ( KiB/s): min=32768, max=32768, per=100.00%, avg=32768.00, stdev= 0.00, samples=1 00:15:57.528 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=1 00:15:57.528 lat (usec) : 50=30.57%, 100=68.77%, 250=0.60%, 500=0.07% 00:15:57.528 cpu : usr=10.00%, sys=15.50%, ctx=15945, majf=0, minf=1 00:15:57.528 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:57.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.528 issued rwts: total=7753,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.528 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:57.528 00:15:57.528 Run status group 0 (all jobs): 00:15:57.528 READ: bw=30.3MiB/s (31.7MB/s), 30.3MiB/s-30.3MiB/s (31.7MB/s-31.7MB/s), io=30.3MiB (31.8MB), run=1001-1001msec 00:15:57.528 WRITE: bw=32.0MiB/s (33.5MB/s), 32.0MiB/s-32.0MiB/s (33.5MB/s-33.5MB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:15:57.528 00:15:57.528 Disk stats (read/write): 00:15:57.528 nvme0n1: ios=7179/7168, merge=0/0, ticks=336/315, in_queue=651, util=91.08% 00:15:57.528 
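As a quick sanity check on the job0 throughput just reported, the bandwidth follows directly from the I/O counts, the 4 KiB block size, and the 1.001 s run time shown in the summary; a throwaway shell line reproduces the figures:

# 7753 reads  * 4096 B = 31,756,288 B / 1.001 s ~= 31.7 MB/s (30.3 MiB/s) -> READ line
# 8192 writes * 4096 B = 33,554,432 B / 1.001 s ~= 33.5 MB/s (32.0 MiB/s) -> WRITE line
awk 'BEGIN { printf "%.1f MB/s read, %.1f MB/s write\n", 7753*4096/1.001/1e6, 8192*4096/1.001/1e6 }'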
11:24:26 nvmf_rdma.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:00.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:00.068 11:24:29 nvmf_rdma.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:00.068 11:24:29 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1218 -- # local i=0 00:16:00.068 11:24:29 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:16:00.068 11:24:29 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:00.068 11:24:29 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:16:00.068 11:24:29 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1230 -- # return 0 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:00.329 rmmod nvme_rdma 00:16:00.329 rmmod nvme_fabrics 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3572249 ']' 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3572249 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@949 -- # '[' -z 3572249 ']' 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@953 -- # kill -0 3572249 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@954 -- # uname 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3572249 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3572249' 00:16:00.329 killing process with pid 3572249 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@968 -- # kill 3572249 00:16:00.329 11:24:29 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@973 -- # wait 3572249 00:16:00.590 11:24:29 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:00.590 11:24:29 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:00.590 00:16:00.590 real 0m17.789s 00:16:00.590 user 0m55.317s 00:16:00.590 sys 0m6.129s 00:16:00.590 11:24:29 nvmf_rdma.nvmf_nmic -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:16:00.590 11:24:29 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:00.590 ************************************ 00:16:00.590 END TEST nvmf_nmic 00:16:00.590 ************************************ 00:16:00.590 11:24:29 nvmf_rdma -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:16:00.590 11:24:29 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:00.590 11:24:29 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:00.590 11:24:29 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:00.590 ************************************ 00:16:00.590 START TEST nvmf_fio_target 00:16:00.590 ************************************ 00:16:00.590 11:24:29 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:16:00.590 * Looking for test storage... 00:16:00.590 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:00.590 11:24:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:00.590 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:00.590 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:00.590 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:00.590 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:00.590 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:00.590 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:00.590 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:00.590 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:00.590 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:00.590 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:00.590 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:00.590 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:00.590 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:00.590 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:00.590 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:00.590 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:00.590 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:00.590 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:00.850 11:24:29 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.850 11:24:29 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.850 11:24:29 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
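The nvmf_fio_target run that starts here drives test/nvmf/target/fio.sh through the harness's run_test wrapper, with the NVMF_* defaults coming from the sourced test/nvmf/common.sh shown above. For local debugging the same test can be launched directly; a minimal sketch (script path and --transport flag exactly as in the trace; running as root and from the repository checkout are assumptions, since the script loads kernel RDMA/NVMe modules):

cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
sudo test/nvmf/target/fio.sh --transport=rdma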
00:16:00.850 11:24:29 nvmf_rdma.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.850 11:24:29 nvmf_rdma.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.850 11:24:29 nvmf_rdma.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.850 11:24:29 nvmf_rdma.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:00.851 11:24:29 
nvmf_rdma.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:00.851 11:24:29 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@312 
-- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:16:08.985 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:16:08.985 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 
== 0 )) 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:16:08.985 Found net devices under 0000:98:00.0: mlx_0_0 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:16:08.985 Found net devices under 0000:98:00.1: mlx_0_1 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@420 -- # rdma_device_init 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # uname 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:08.985 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:08.986 11:24:36 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:08.986 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:08.986 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:16:08.986 altname enp152s0f0np0 00:16:08.986 altname ens817f0np0 00:16:08.986 inet 192.168.100.8/24 scope global mlx_0_0 00:16:08.986 valid_lft forever preferred_lft forever 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:08.986 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:08.986 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:16:08.986 altname enp152s0f1np1 00:16:08.986 altname ens817f1np1 00:16:08.986 inet 192.168.100.9/24 scope global mlx_0_1 00:16:08.986 valid_lft forever 
preferred_lft forever 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:08.986 11:24:36 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:08.986 192.168.100.9' 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:08.986 192.168.100.9' 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # head -n 1 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:08.986 192.168.100.9' 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # tail -n +2 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # head -n 1 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3578263 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3578263 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@830 -- # '[' -z 3578263 ']' 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:08.986 11:24:36 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.986 [2024-06-10 11:24:36.824655] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:16:08.986 [2024-06-10 11:24:36.824743] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.986 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.986 [2024-06-10 11:24:36.891997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:08.986 [2024-06-10 11:24:36.968024] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:08.987 [2024-06-10 11:24:36.968063] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.987 [2024-06-10 11:24:36.968070] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:08.987 [2024-06-10 11:24:36.968077] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:08.987 [2024-06-10 11:24:36.968082] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:08.987 [2024-06-10 11:24:36.968222] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.987 [2024-06-10 11:24:36.968361] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.987 [2024-06-10 11:24:36.968523] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.987 [2024-06-10 11:24:36.968525] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:16:08.987 11:24:37 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:08.987 11:24:37 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@863 -- # return 0 00:16:08.987 11:24:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:08.987 11:24:37 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:08.987 11:24:37 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.987 11:24:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.987 11:24:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:08.987 [2024-06-10 11:24:37.808389] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x144a0b0/0x144e5a0) succeed. 00:16:08.987 [2024-06-10 11:24:37.823132] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x144b6f0/0x148fc30) succeed. 
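The app_setup_trace notices above describe how to inspect this nvmf_tgt instance's tracepoints while it is running. Spelled out as commands (the spdk_trace path assumes the default build layout under build/bin; the shm id 0 matches the "-i 0" the target was started with, and nvmf_trace.0 is the shm file the notice names):

# Take a snapshot of the nvmf tracepoints from the live target, as the notice suggests:
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
# Or preserve the trace shm file for offline analysis after the target exits:
cp /dev/shm/nvmf_trace.0 /tmp/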
00:16:09.247 11:24:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:09.247 11:24:38 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:09.247 11:24:38 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:09.507 11:24:38 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:09.507 11:24:38 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:09.768 11:24:38 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:09.768 11:24:38 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:09.768 11:24:38 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:09.768 11:24:38 nvmf_rdma.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:10.028 11:24:38 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:10.288 11:24:39 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:10.288 11:24:39 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:10.288 11:24:39 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:10.288 11:24:39 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:10.547 11:24:39 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:10.547 11:24:39 nvmf_rdma.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:10.807 11:24:39 nvmf_rdma.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:10.807 11:24:39 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:10.807 11:24:39 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:11.067 11:24:39 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:11.067 11:24:39 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:11.327 11:24:40 nvmf_rdma.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:11.327 [2024-06-10 11:24:40.210058] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:11.327 11:24:40 nvmf_rdma.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 raid0 00:16:11.587 11:24:40 nvmf_rdma.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:11.587 11:24:40 nvmf_rdma.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:12.994 11:24:41 nvmf_rdma.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:12.994 11:24:41 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local i=0 00:16:12.994 11:24:41 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:16:12.994 11:24:41 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1199 -- # [[ -n 4 ]] 00:16:12.994 11:24:41 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1200 -- # nvme_device_counter=4 00:16:12.994 11:24:41 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1204 -- # sleep 2 00:16:15.533 11:24:43 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:16:15.533 11:24:43 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:15.533 11:24:43 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:16:15.533 11:24:43 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1206 -- # nvme_devices=4 00:16:15.533 11:24:43 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:16:15.533 11:24:43 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # return 0 00:16:15.533 11:24:43 nvmf_rdma.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:15.533 [global] 00:16:15.533 thread=1 00:16:15.533 invalidate=1 00:16:15.533 rw=write 00:16:15.533 time_based=1 00:16:15.533 runtime=1 00:16:15.533 ioengine=libaio 00:16:15.533 direct=1 00:16:15.533 bs=4096 00:16:15.533 iodepth=1 00:16:15.533 norandommap=0 00:16:15.533 numjobs=1 00:16:15.533 00:16:15.533 verify_dump=1 00:16:15.533 verify_backlog=512 00:16:15.533 verify_state_save=0 00:16:15.533 do_verify=1 00:16:15.533 verify=crc32c-intel 00:16:15.533 [job0] 00:16:15.533 filename=/dev/nvme0n1 00:16:15.533 [job1] 00:16:15.533 filename=/dev/nvme0n2 00:16:15.533 [job2] 00:16:15.533 filename=/dev/nvme0n3 00:16:15.533 [job3] 00:16:15.533 filename=/dev/nvme0n4 00:16:15.533 Could not set queue depth (nvme0n1) 00:16:15.533 Could not set queue depth (nvme0n2) 00:16:15.533 Could not set queue depth (nvme0n3) 00:16:15.533 Could not set queue depth (nvme0n4) 00:16:15.534 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:15.534 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:15.534 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:15.534 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:15.534 fio-3.35 00:16:15.534 Starting 4 threads 00:16:16.915 00:16:16.915 job0: (groupid=0, jobs=1): err= 0: pid=3580077: Mon Jun 10 11:24:45 2024 00:16:16.915 read: IOPS=7672, BW=30.0MiB/s (31.4MB/s)(30.0MiB/1001msec) 00:16:16.915 slat 
(nsec): min=5558, max=46643, avg=6135.24, stdev=1393.26 00:16:16.915 clat (usec): min=36, max=954, avg=56.52, stdev=14.90 00:16:16.915 lat (usec): min=50, max=961, avg=62.65, stdev=15.47 00:16:16.915 clat percentiles (usec): 00:16:16.915 | 1.00th=[ 47], 5.00th=[ 49], 10.00th=[ 50], 20.00th=[ 51], 00:16:16.915 | 30.00th=[ 53], 40.00th=[ 54], 50.00th=[ 55], 60.00th=[ 56], 00:16:16.915 | 70.00th=[ 58], 80.00th=[ 60], 90.00th=[ 64], 95.00th=[ 71], 00:16:16.915 | 99.00th=[ 85], 99.50th=[ 96], 99.90th=[ 233], 99.95th=[ 243], 00:16:16.915 | 99.99th=[ 955] 00:16:16.915 write: IOPS=7672, BW=30.0MiB/s (31.4MB/s)(30.0MiB/1001msec); 0 zone resets 00:16:16.915 slat (nsec): min=7797, max=68502, avg=8502.82, stdev=1091.76 00:16:16.915 clat (usec): min=42, max=266, avg=54.56, stdev=11.43 00:16:16.915 lat (usec): min=51, max=275, avg=63.06, stdev=11.55 00:16:16.915 clat percentiles (usec): 00:16:16.915 | 1.00th=[ 46], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 50], 00:16:16.915 | 30.00th=[ 51], 40.00th=[ 52], 50.00th=[ 53], 60.00th=[ 55], 00:16:16.915 | 70.00th=[ 56], 80.00th=[ 58], 90.00th=[ 61], 95.00th=[ 67], 00:16:16.915 | 99.00th=[ 86], 99.50th=[ 101], 99.90th=[ 221], 99.95th=[ 229], 00:16:16.915 | 99.99th=[ 269] 00:16:16.915 bw ( KiB/s): min=32080, max=32080, per=50.29%, avg=32080.00, stdev= 0.00, samples=1 00:16:16.915 iops : min= 8020, max= 8020, avg=8020.00, stdev= 0.00, samples=1 00:16:16.915 lat (usec) : 50=18.15%, 100=81.40%, 250=0.43%, 500=0.01%, 1000=0.01% 00:16:16.915 cpu : usr=9.40%, sys=15.60%, ctx=15361, majf=0, minf=1 00:16:16.915 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:16.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.915 issued rwts: total=7680,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.915 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:16.915 job1: (groupid=0, jobs=1): err= 0: pid=3580078: Mon Jun 10 11:24:45 2024 00:16:16.915 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:16:16.915 slat (nsec): min=5679, max=65179, avg=16079.48, stdev=11544.98 00:16:16.915 clat (usec): min=45, max=461, avg=160.29, stdev=90.28 00:16:16.915 lat (usec): min=54, max=487, avg=176.37, stdev=97.52 00:16:16.915 clat percentiles (usec): 00:16:16.915 | 1.00th=[ 52], 5.00th=[ 59], 10.00th=[ 65], 20.00th=[ 75], 00:16:16.915 | 30.00th=[ 86], 40.00th=[ 99], 50.00th=[ 113], 60.00th=[ 196], 00:16:16.915 | 70.00th=[ 229], 80.00th=[ 245], 90.00th=[ 273], 95.00th=[ 318], 00:16:16.915 | 99.00th=[ 379], 99.50th=[ 396], 99.90th=[ 449], 99.95th=[ 461], 00:16:16.915 | 99.99th=[ 461] 00:16:16.915 write: IOPS=2683, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec); 0 zone resets 00:16:16.915 slat (nsec): min=7611, max=54453, avg=20517.48, stdev=12744.65 00:16:16.915 clat (usec): min=44, max=473, avg=173.84, stdev=95.81 00:16:16.915 lat (usec): min=52, max=507, avg=194.36, stdev=103.71 00:16:16.915 clat percentiles (usec): 00:16:16.915 | 1.00th=[ 49], 5.00th=[ 59], 10.00th=[ 68], 20.00th=[ 79], 00:16:16.915 | 30.00th=[ 94], 40.00th=[ 103], 50.00th=[ 155], 60.00th=[ 223], 00:16:16.915 | 70.00th=[ 243], 80.00th=[ 262], 90.00th=[ 293], 95.00th=[ 338], 00:16:16.915 | 99.00th=[ 404], 99.50th=[ 429], 99.90th=[ 461], 99.95th=[ 469], 00:16:16.915 | 99.99th=[ 474] 00:16:16.915 bw ( KiB/s): min=12288, max=12288, per=19.26%, avg=12288.00, stdev= 0.00, samples=1 00:16:16.915 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:16.915 
lat (usec) : 50=0.97%, 100=37.76%, 250=38.96%, 500=22.30% 00:16:16.915 cpu : usr=6.70%, sys=13.10%, ctx=5246, majf=0, minf=1 00:16:16.915 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:16.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.915 issued rwts: total=2560,2686,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.915 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:16.915 job2: (groupid=0, jobs=1): err= 0: pid=3580079: Mon Jun 10 11:24:45 2024 00:16:16.915 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:16:16.915 slat (nsec): min=5674, max=49568, avg=15321.15, stdev=11185.47 00:16:16.915 clat (usec): min=52, max=464, avg=160.79, stdev=92.10 00:16:16.915 lat (usec): min=58, max=482, avg=176.11, stdev=98.95 00:16:16.915 clat percentiles (usec): 00:16:16.915 | 1.00th=[ 58], 5.00th=[ 64], 10.00th=[ 73], 20.00th=[ 82], 00:16:16.916 | 30.00th=[ 90], 40.00th=[ 99], 50.00th=[ 109], 60.00th=[ 192], 00:16:16.916 | 70.00th=[ 231], 80.00th=[ 249], 90.00th=[ 281], 95.00th=[ 334], 00:16:16.916 | 99.00th=[ 396], 99.50th=[ 416], 99.90th=[ 449], 99.95th=[ 449], 00:16:16.916 | 99.99th=[ 465] 00:16:16.916 write: IOPS=3033, BW=11.9MiB/s (12.4MB/s)(11.9MiB/1001msec); 0 zone resets 00:16:16.916 slat (nsec): min=8089, max=53781, avg=17790.73, stdev=11988.60 00:16:16.916 clat (usec): min=39, max=508, avg=154.87, stdev=94.93 00:16:16.916 lat (usec): min=59, max=517, avg=172.66, stdev=102.51 00:16:16.916 clat percentiles (usec): 00:16:16.916 | 1.00th=[ 55], 5.00th=[ 61], 10.00th=[ 69], 20.00th=[ 78], 00:16:16.916 | 30.00th=[ 86], 40.00th=[ 94], 50.00th=[ 102], 60.00th=[ 126], 00:16:16.916 | 70.00th=[ 225], 80.00th=[ 251], 90.00th=[ 285], 95.00th=[ 334], 00:16:16.916 | 99.00th=[ 408], 99.50th=[ 429], 99.90th=[ 453], 99.95th=[ 469], 00:16:16.916 | 99.99th=[ 510] 00:16:16.916 bw ( KiB/s): min=12288, max=12288, per=19.26%, avg=12288.00, stdev= 0.00, samples=1 00:16:16.916 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:16.916 lat (usec) : 50=0.02%, 100=45.15%, 250=34.88%, 500=19.94%, 750=0.02% 00:16:16.916 cpu : usr=6.20%, sys=12.70%, ctx=5597, majf=0, minf=1 00:16:16.916 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:16.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.916 issued rwts: total=2560,3037,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.916 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:16.916 job3: (groupid=0, jobs=1): err= 0: pid=3580080: Mon Jun 10 11:24:45 2024 00:16:16.916 read: IOPS=2514, BW=9.82MiB/s (10.3MB/s)(9.83MiB/1001msec) 00:16:16.916 slat (nsec): min=5800, max=60101, avg=16728.32, stdev=11340.39 00:16:16.916 clat (usec): min=52, max=462, avg=174.96, stdev=89.91 00:16:16.916 lat (usec): min=59, max=491, avg=191.69, stdev=96.47 00:16:16.916 clat percentiles (usec): 00:16:16.916 | 1.00th=[ 60], 5.00th=[ 73], 10.00th=[ 79], 20.00th=[ 89], 00:16:16.916 | 30.00th=[ 97], 40.00th=[ 106], 50.00th=[ 186], 60.00th=[ 221], 00:16:16.916 | 70.00th=[ 235], 80.00th=[ 253], 90.00th=[ 281], 95.00th=[ 338], 00:16:16.916 | 99.00th=[ 396], 99.50th=[ 408], 99.90th=[ 453], 99.95th=[ 457], 00:16:16.916 | 99.99th=[ 461] 00:16:16.916 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:16.916 slat (nsec): min=8139, 
max=59683, avg=19904.30, stdev=12339.81 00:16:16.916 clat (usec): min=52, max=480, avg=172.16, stdev=91.07 00:16:16.916 lat (usec): min=60, max=488, avg=192.07, stdev=98.34 00:16:16.916 clat percentiles (usec): 00:16:16.916 | 1.00th=[ 57], 5.00th=[ 66], 10.00th=[ 76], 20.00th=[ 85], 00:16:16.916 | 30.00th=[ 94], 40.00th=[ 103], 50.00th=[ 141], 60.00th=[ 217], 00:16:16.916 | 70.00th=[ 239], 80.00th=[ 258], 90.00th=[ 289], 95.00th=[ 326], 00:16:16.916 | 99.00th=[ 383], 99.50th=[ 408], 99.90th=[ 441], 99.95th=[ 461], 00:16:16.916 | 99.99th=[ 482] 00:16:16.916 bw ( KiB/s): min=12288, max=12288, per=19.26%, avg=12288.00, stdev= 0.00, samples=1 00:16:16.916 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:16.916 lat (usec) : 100=35.14%, 250=42.41%, 500=22.45% 00:16:16.916 cpu : usr=6.60%, sys=12.70%, ctx=5078, majf=0, minf=1 00:16:16.916 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:16.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.916 issued rwts: total=2517,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.916 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:16.916 00:16:16.916 Run status group 0 (all jobs): 00:16:16.916 READ: bw=59.8MiB/s (62.7MB/s), 9.82MiB/s-30.0MiB/s (10.3MB/s-31.4MB/s), io=59.8MiB (62.7MB), run=1001-1001msec 00:16:16.916 WRITE: bw=62.3MiB/s (65.3MB/s), 9.99MiB/s-30.0MiB/s (10.5MB/s-31.4MB/s), io=62.4MiB (65.4MB), run=1001-1001msec 00:16:16.916 00:16:16.916 Disk stats (read/write): 00:16:16.916 nvme0n1: ios=6471/6656, merge=0/0, ticks=324/302, in_queue=626, util=86.27% 00:16:16.916 nvme0n2: ios=2048/2234, merge=0/0, ticks=202/237, in_queue=439, util=86.67% 00:16:16.916 nvme0n3: ios=2048/2162, merge=0/0, ticks=245/248, in_queue=493, util=88.84% 00:16:16.916 nvme0n4: ios=2048/2090, merge=0/0, ticks=246/226, in_queue=472, util=89.68% 00:16:16.916 11:24:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:16.916 [global] 00:16:16.916 thread=1 00:16:16.916 invalidate=1 00:16:16.916 rw=randwrite 00:16:16.916 time_based=1 00:16:16.916 runtime=1 00:16:16.916 ioengine=libaio 00:16:16.916 direct=1 00:16:16.916 bs=4096 00:16:16.916 iodepth=1 00:16:16.916 norandommap=0 00:16:16.916 numjobs=1 00:16:16.916 00:16:16.916 verify_dump=1 00:16:16.916 verify_backlog=512 00:16:16.916 verify_state_save=0 00:16:16.916 do_verify=1 00:16:16.916 verify=crc32c-intel 00:16:16.916 [job0] 00:16:16.916 filename=/dev/nvme0n1 00:16:16.916 [job1] 00:16:16.916 filename=/dev/nvme0n2 00:16:16.916 [job2] 00:16:16.916 filename=/dev/nvme0n3 00:16:16.916 [job3] 00:16:16.916 filename=/dev/nvme0n4 00:16:16.916 Could not set queue depth (nvme0n1) 00:16:16.916 Could not set queue depth (nvme0n2) 00:16:16.916 Could not set queue depth (nvme0n3) 00:16:16.916 Could not set queue depth (nvme0n4) 00:16:17.176 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:17.176 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:17.176 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:17.176 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:17.176 fio-3.35 00:16:17.176 Starting 4 
threads 00:16:18.556 00:16:18.556 job0: (groupid=0, jobs=1): err= 0: pid=3580604: Mon Jun 10 11:24:47 2024 00:16:18.556 read: IOPS=1628, BW=6513KiB/s (6670kB/s)(6520KiB/1001msec) 00:16:18.556 slat (nsec): min=5673, max=47522, avg=21344.83, stdev=11125.62 00:16:18.556 clat (usec): min=62, max=476, avg=232.31, stdev=91.29 00:16:18.556 lat (usec): min=71, max=507, avg=253.65, stdev=95.89 00:16:18.556 clat percentiles (usec): 00:16:18.556 | 1.00th=[ 69], 5.00th=[ 81], 10.00th=[ 100], 20.00th=[ 139], 00:16:18.556 | 30.00th=[ 196], 40.00th=[ 225], 50.00th=[ 237], 60.00th=[ 251], 00:16:18.556 | 70.00th=[ 269], 80.00th=[ 297], 90.00th=[ 363], 95.00th=[ 392], 00:16:18.556 | 99.00th=[ 437], 99.50th=[ 453], 99.90th=[ 474], 99.95th=[ 478], 00:16:18.556 | 99.99th=[ 478] 00:16:18.556 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:18.556 slat (nsec): min=7729, max=67827, avg=25374.11, stdev=11649.13 00:16:18.556 clat (usec): min=59, max=953, avg=249.37, stdev=98.45 00:16:18.556 lat (usec): min=68, max=986, avg=274.75, stdev=103.96 00:16:18.556 clat percentiles (usec): 00:16:18.556 | 1.00th=[ 69], 5.00th=[ 83], 10.00th=[ 104], 20.00th=[ 141], 00:16:18.556 | 30.00th=[ 219], 40.00th=[ 243], 50.00th=[ 258], 60.00th=[ 269], 00:16:18.556 | 70.00th=[ 289], 80.00th=[ 330], 90.00th=[ 383], 95.00th=[ 412], 00:16:18.556 | 99.00th=[ 445], 99.50th=[ 457], 99.90th=[ 490], 99.95th=[ 498], 00:16:18.556 | 99.99th=[ 955] 00:16:18.556 bw ( KiB/s): min= 8192, max= 8192, per=17.92%, avg=8192.00, stdev= 0.00, samples=1 00:16:18.556 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:18.556 lat (usec) : 100=9.27%, 250=41.73%, 500=48.97%, 1000=0.03% 00:16:18.556 cpu : usr=7.40%, sys=10.60%, ctx=3678, majf=0, minf=1 00:16:18.556 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:18.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.557 issued rwts: total=1630,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.557 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:18.557 job1: (groupid=0, jobs=1): err= 0: pid=3580605: Mon Jun 10 11:24:47 2024 00:16:18.557 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:16:18.557 slat (nsec): min=5153, max=53223, avg=10132.01, stdev=8884.48 00:16:18.557 clat (usec): min=34, max=500, avg=104.61, stdev=92.19 00:16:18.557 lat (usec): min=50, max=547, avg=114.75, stdev=98.91 00:16:18.557 clat percentiles (usec): 00:16:18.557 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 53], 00:16:18.557 | 30.00th=[ 55], 40.00th=[ 57], 50.00th=[ 59], 60.00th=[ 61], 00:16:18.557 | 70.00th=[ 68], 80.00th=[ 190], 90.00th=[ 260], 95.00th=[ 302], 00:16:18.557 | 99.00th=[ 408], 99.50th=[ 420], 99.90th=[ 465], 99.95th=[ 482], 00:16:18.557 | 99.99th=[ 502] 00:16:18.557 write: IOPS=3753, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1001msec); 0 zone resets 00:16:18.557 slat (nsec): min=7586, max=55436, avg=15163.51, stdev=11317.16 00:16:18.557 clat (usec): min=43, max=526, avg=134.52, stdev=118.91 00:16:18.557 lat (usec): min=51, max=561, avg=149.68, stdev=128.06 00:16:18.557 clat percentiles (usec): 00:16:18.557 | 1.00th=[ 47], 5.00th=[ 49], 10.00th=[ 50], 20.00th=[ 52], 00:16:18.557 | 30.00th=[ 54], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 65], 00:16:18.557 | 70.00th=[ 204], 80.00th=[ 262], 90.00th=[ 322], 95.00th=[ 383], 00:16:18.557 | 99.00th=[ 441], 99.50th=[ 453], 99.90th=[ 494], 99.95th=[ 502], 
00:16:18.557 | 99.99th=[ 529] 00:16:18.557 bw ( KiB/s): min= 8192, max= 8192, per=17.92%, avg=8192.00, stdev= 0.00, samples=1 00:16:18.557 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:18.557 lat (usec) : 50=8.27%, 100=62.16%, 250=11.29%, 500=18.23%, 750=0.05% 00:16:18.557 cpu : usr=8.00%, sys=11.80%, ctx=7341, majf=0, minf=1 00:16:18.557 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:18.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.557 issued rwts: total=3584,3757,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.557 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:18.557 job2: (groupid=0, jobs=1): err= 0: pid=3580606: Mon Jun 10 11:24:47 2024 00:16:18.557 read: IOPS=1753, BW=7013KiB/s (7181kB/s)(7020KiB/1001msec) 00:16:18.557 slat (nsec): min=5877, max=49685, avg=19769.53, stdev=11030.74 00:16:18.557 clat (usec): min=67, max=705, avg=227.97, stdev=95.05 00:16:18.557 lat (usec): min=73, max=734, avg=247.74, stdev=99.82 00:16:18.557 clat percentiles (usec): 00:16:18.557 | 1.00th=[ 76], 5.00th=[ 85], 10.00th=[ 99], 20.00th=[ 122], 00:16:18.557 | 30.00th=[ 174], 40.00th=[ 219], 50.00th=[ 235], 60.00th=[ 251], 00:16:18.557 | 70.00th=[ 269], 80.00th=[ 302], 90.00th=[ 367], 95.00th=[ 392], 00:16:18.557 | 99.00th=[ 437], 99.50th=[ 457], 99.90th=[ 498], 99.95th=[ 709], 00:16:18.557 | 99.99th=[ 709] 00:16:18.557 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:18.557 slat (nsec): min=7880, max=52314, avg=23115.95, stdev=11804.64 00:16:18.557 clat (usec): min=69, max=500, avg=241.92, stdev=94.73 00:16:18.557 lat (usec): min=77, max=535, avg=265.04, stdev=99.41 00:16:18.557 clat percentiles (usec): 00:16:18.557 | 1.00th=[ 76], 5.00th=[ 87], 10.00th=[ 101], 20.00th=[ 130], 00:16:18.557 | 30.00th=[ 206], 40.00th=[ 239], 50.00th=[ 253], 60.00th=[ 265], 00:16:18.557 | 70.00th=[ 285], 80.00th=[ 322], 90.00th=[ 367], 95.00th=[ 400], 00:16:18.557 | 99.00th=[ 433], 99.50th=[ 445], 99.90th=[ 469], 99.95th=[ 478], 00:16:18.557 | 99.99th=[ 502] 00:16:18.557 bw ( KiB/s): min= 8192, max= 8192, per=17.92%, avg=8192.00, stdev= 0.00, samples=1 00:16:18.557 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:18.557 lat (usec) : 100=10.12%, 250=42.86%, 500=46.96%, 750=0.05% 00:16:18.557 cpu : usr=7.80%, sys=9.30%, ctx=3803, majf=0, minf=1 00:16:18.557 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:18.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.557 issued rwts: total=1755,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.557 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:18.557 job3: (groupid=0, jobs=1): err= 0: pid=3580607: Mon Jun 10 11:24:47 2024 00:16:18.557 read: IOPS=3199, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1001msec) 00:16:18.557 slat (nsec): min=5180, max=63548, avg=11485.87, stdev=10358.76 00:16:18.557 clat (usec): min=50, max=505, avg=127.91, stdev=105.14 00:16:18.557 lat (usec): min=55, max=533, avg=139.40, stdev=113.06 00:16:18.557 clat percentiles (usec): 00:16:18.557 | 1.00th=[ 53], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 59], 00:16:18.557 | 30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 65], 60.00th=[ 71], 00:16:18.557 | 70.00th=[ 139], 80.00th=[ 241], 90.00th=[ 289], 95.00th=[ 359], 00:16:18.557 | 
99.00th=[ 420], 99.50th=[ 433], 99.90th=[ 486], 99.95th=[ 502], 00:16:18.557 | 99.99th=[ 506] 00:16:18.557 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:16:18.557 slat (nsec): min=7479, max=51940, avg=14341.45, stdev=10909.33 00:16:18.557 clat (usec): min=36, max=507, avg=133.09, stdev=113.29 00:16:18.557 lat (usec): min=56, max=515, avg=147.43, stdev=122.00 00:16:18.557 clat percentiles (usec): 00:16:18.557 | 1.00th=[ 51], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 57], 00:16:18.557 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 69], 00:16:18.557 | 70.00th=[ 155], 80.00th=[ 260], 90.00th=[ 318], 95.00th=[ 371], 00:16:18.557 | 99.00th=[ 424], 99.50th=[ 437], 99.90th=[ 461], 99.95th=[ 469], 00:16:18.557 | 99.99th=[ 506] 00:16:18.557 bw ( KiB/s): min= 8192, max= 8192, per=17.92%, avg=8192.00, stdev= 0.00, samples=1 00:16:18.557 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:18.557 lat (usec) : 50=0.25%, 100=66.64%, 250=12.73%, 500=20.32%, 750=0.06% 00:16:18.557 cpu : usr=5.20%, sys=12.50%, ctx=6787, majf=0, minf=1 00:16:18.557 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:18.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.557 issued rwts: total=3203,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.557 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:18.557 00:16:18.557 Run status group 0 (all jobs): 00:16:18.557 READ: bw=39.7MiB/s (41.6MB/s), 6513KiB/s-14.0MiB/s (6670kB/s-14.7MB/s), io=39.7MiB (41.7MB), run=1001-1001msec 00:16:18.557 WRITE: bw=44.6MiB/s (46.8MB/s), 8184KiB/s-14.7MiB/s (8380kB/s-15.4MB/s), io=44.7MiB (46.8MB), run=1001-1001msec 00:16:18.557 00:16:18.557 Disk stats (read/write): 00:16:18.557 nvme0n1: ios=1420/1536, merge=0/0, ticks=238/270, in_queue=508, util=83.37% 00:16:18.557 nvme0n2: ios=2048/2522, merge=0/0, ticks=213/311, in_queue=524, util=84.14% 00:16:18.557 nvme0n3: ios=1477/1536, merge=0/0, ticks=239/297, in_queue=536, util=87.91% 00:16:18.557 nvme0n4: ios=2560/2853, merge=0/0, ticks=260/294, in_queue=554, util=89.35% 00:16:18.557 11:24:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:18.557 [global] 00:16:18.557 thread=1 00:16:18.557 invalidate=1 00:16:18.557 rw=write 00:16:18.557 time_based=1 00:16:18.557 runtime=1 00:16:18.557 ioengine=libaio 00:16:18.557 direct=1 00:16:18.557 bs=4096 00:16:18.557 iodepth=128 00:16:18.557 norandommap=0 00:16:18.557 numjobs=1 00:16:18.557 00:16:18.557 verify_dump=1 00:16:18.557 verify_backlog=512 00:16:18.557 verify_state_save=0 00:16:18.557 do_verify=1 00:16:18.557 verify=crc32c-intel 00:16:18.557 [job0] 00:16:18.557 filename=/dev/nvme0n1 00:16:18.557 [job1] 00:16:18.557 filename=/dev/nvme0n2 00:16:18.557 [job2] 00:16:18.557 filename=/dev/nvme0n3 00:16:18.557 [job3] 00:16:18.557 filename=/dev/nvme0n4 00:16:18.557 Could not set queue depth (nvme0n1) 00:16:18.557 Could not set queue depth (nvme0n2) 00:16:18.557 Could not set queue depth (nvme0n3) 00:16:18.557 Could not set queue depth (nvme0n4) 00:16:18.818 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:18.818 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:18.818 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:18.818 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:18.818 fio-3.35 00:16:18.818 Starting 4 threads 00:16:20.203 00:16:20.203 job0: (groupid=0, jobs=1): err= 0: pid=3581128: Mon Jun 10 11:24:48 2024 00:16:20.203 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:16:20.203 slat (nsec): min=1220, max=1495.9k, avg=96169.12, stdev=220314.95 00:16:20.203 clat (usec): min=11045, max=13792, avg=12427.81, stdev=335.80 00:16:20.203 lat (usec): min=11051, max=14155, avg=12523.98, stdev=337.66 00:16:20.203 clat percentiles (usec): 00:16:20.203 | 1.00th=[11600], 5.00th=[11863], 10.00th=[11994], 20.00th=[12125], 00:16:20.203 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12387], 60.00th=[12518], 00:16:20.203 | 70.00th=[12518], 80.00th=[12649], 90.00th=[12780], 95.00th=[13042], 00:16:20.203 | 99.00th=[13304], 99.50th=[13304], 99.90th=[13566], 99.95th=[13698], 00:16:20.203 | 99.99th=[13829] 00:16:20.203 write: IOPS=5382, BW=21.0MiB/s (22.0MB/s)(21.1MiB/1004msec); 0 zone resets 00:16:20.203 slat (nsec): min=1710, max=1218.5k, avg=91460.24, stdev=203524.35 00:16:20.203 clat (usec): min=2580, max=14321, avg=11729.01, stdev=830.28 00:16:20.203 lat (usec): min=3230, max=14323, avg=11820.47, stdev=832.27 00:16:20.203 clat percentiles (usec): 00:16:20.203 | 1.00th=[ 6980], 5.00th=[11207], 10.00th=[11338], 20.00th=[11600], 00:16:20.203 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11863], 60.00th=[11863], 00:16:20.203 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12256], 95.00th=[12387], 00:16:20.203 | 99.00th=[12518], 99.50th=[12911], 99.90th=[13173], 99.95th=[13829], 00:16:20.203 | 99.99th=[14353] 00:16:20.203 bw ( KiB/s): min=20824, max=21392, per=17.67%, avg=21108.00, stdev=401.64, samples=2 00:16:20.203 iops : min= 5206, max= 5348, avg=5277.00, stdev=100.41, samples=2 00:16:20.203 lat (msec) : 4=0.15%, 10=0.82%, 20=99.03% 00:16:20.203 cpu : usr=3.09%, sys=5.28%, ctx=2384, majf=0, minf=1 00:16:20.203 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:20.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:20.203 issued rwts: total=5120,5404,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.203 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:20.203 job1: (groupid=0, jobs=1): err= 0: pid=3581129: Mon Jun 10 11:24:48 2024 00:16:20.203 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:16:20.203 slat (nsec): min=1207, max=1625.3k, avg=96295.85, stdev=217322.54 00:16:20.203 clat (usec): min=11228, max=13613, avg=12407.18, stdev=343.02 00:16:20.203 lat (usec): min=11262, max=14140, avg=12503.47, stdev=345.08 00:16:20.203 clat percentiles (usec): 00:16:20.203 | 1.00th=[11600], 5.00th=[11863], 10.00th=[11994], 20.00th=[12125], 00:16:20.203 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12387], 60.00th=[12518], 00:16:20.203 | 70.00th=[12518], 80.00th=[12649], 90.00th=[12780], 95.00th=[13042], 00:16:20.203 | 99.00th=[13304], 99.50th=[13304], 99.90th=[13566], 99.95th=[13566], 00:16:20.203 | 99.99th=[13566] 00:16:20.203 write: IOPS=5391, BW=21.1MiB/s (22.1MB/s)(21.1MiB/1004msec); 0 zone resets 00:16:20.203 slat (nsec): min=1709, max=1338.6k, avg=91176.52, stdev=202772.03 00:16:20.203 clat (usec): min=2720, max=13776, avg=11723.59, stdev=811.04 00:16:20.203 lat (usec): min=3322, max=14269, avg=11814.77, stdev=814.82 
00:16:20.203 clat percentiles (usec): 00:16:20.203 | 1.00th=[ 7439], 5.00th=[11076], 10.00th=[11338], 20.00th=[11469], 00:16:20.203 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[11863], 00:16:20.203 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12256], 95.00th=[12387], 00:16:20.203 | 99.00th=[12518], 99.50th=[12780], 99.90th=[13698], 99.95th=[13698], 00:16:20.203 | 99.99th=[13829] 00:16:20.203 bw ( KiB/s): min=20848, max=21440, per=17.70%, avg=21144.00, stdev=418.61, samples=2 00:16:20.203 iops : min= 5212, max= 5360, avg=5286.00, stdev=104.65, samples=2 00:16:20.203 lat (msec) : 4=0.12%, 10=0.82%, 20=99.06% 00:16:20.203 cpu : usr=2.09%, sys=5.98%, ctx=2602, majf=0, minf=1 00:16:20.203 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:20.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:20.203 issued rwts: total=5120,5413,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.203 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:20.203 job2: (groupid=0, jobs=1): err= 0: pid=3581130: Mon Jun 10 11:24:48 2024 00:16:20.203 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:16:20.203 slat (nsec): min=1247, max=1774.6k, avg=97210.67, stdev=228575.64 00:16:20.203 clat (usec): min=11241, max=14312, avg=12459.48, stdev=341.08 00:16:20.203 lat (usec): min=11292, max=14692, avg=12556.69, stdev=353.45 00:16:20.203 clat percentiles (usec): 00:16:20.203 | 1.00th=[11600], 5.00th=[11863], 10.00th=[11994], 20.00th=[12256], 00:16:20.203 | 30.00th=[12387], 40.00th=[12387], 50.00th=[12518], 60.00th=[12518], 00:16:20.203 | 70.00th=[12649], 80.00th=[12649], 90.00th=[12780], 95.00th=[13042], 00:16:20.203 | 99.00th=[13304], 99.50th=[13566], 99.90th=[14091], 99.95th=[14222], 00:16:20.203 | 99.99th=[14353] 00:16:20.203 write: IOPS=5349, BW=20.9MiB/s (21.9MB/s)(21.0MiB/1005msec); 0 zone resets 00:16:20.203 slat (nsec): min=1720, max=1524.5k, avg=90922.48, stdev=207246.76 00:16:20.203 clat (usec): min=2580, max=14320, avg=11759.67, stdev=780.33 00:16:20.203 lat (usec): min=3259, max=14323, avg=11850.59, stdev=783.85 00:16:20.203 clat percentiles (usec): 00:16:20.203 | 1.00th=[ 7963], 5.00th=[11207], 10.00th=[11338], 20.00th=[11600], 00:16:20.203 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11863], 60.00th=[11863], 00:16:20.203 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12256], 95.00th=[12387], 00:16:20.203 | 99.00th=[12780], 99.50th=[12911], 99.90th=[13829], 99.95th=[13829], 00:16:20.203 | 99.99th=[14353] 00:16:20.203 bw ( KiB/s): min=20808, max=21184, per=17.57%, avg=20996.00, stdev=265.87, samples=2 00:16:20.203 iops : min= 5202, max= 5296, avg=5249.00, stdev=66.47, samples=2 00:16:20.203 lat (msec) : 4=0.06%, 10=0.83%, 20=99.11% 00:16:20.203 cpu : usr=2.59%, sys=5.88%, ctx=2385, majf=0, minf=1 00:16:20.203 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:20.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:20.203 issued rwts: total=5120,5376,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.203 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:20.203 job3: (groupid=0, jobs=1): err= 0: pid=3581131: Mon Jun 10 11:24:48 2024 00:16:20.203 read: IOPS=13.7k, BW=53.4MiB/s (56.0MB/s)(53.4MiB/1001msec) 00:16:20.203 slat (nsec): min=1184, max=1263.4k, avg=35926.77, stdev=135214.03 
00:16:20.203 clat (usec): min=206, max=8925, avg=4704.95, stdev=426.43 00:16:20.203 lat (usec): min=787, max=8926, avg=4740.87, stdev=424.43 00:16:20.203 clat percentiles (usec): 00:16:20.203 | 1.00th=[ 3556], 5.00th=[ 4146], 10.00th=[ 4359], 20.00th=[ 4555], 00:16:20.203 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4752], 00:16:20.203 | 70.00th=[ 4817], 80.00th=[ 4883], 90.00th=[ 5014], 95.00th=[ 5145], 00:16:20.203 | 99.00th=[ 5473], 99.50th=[ 5800], 99.90th=[ 8356], 99.95th=[ 8455], 00:16:20.203 | 99.99th=[ 8979] 00:16:20.203 write: IOPS=13.8k, BW=53.9MiB/s (56.6MB/s)(54.0MiB/1001msec); 0 zone resets 00:16:20.203 slat (nsec): min=1666, max=1176.2k, avg=35014.58, stdev=129938.16 00:16:20.204 clat (usec): min=1056, max=5594, avg=4522.55, stdev=355.29 00:16:20.204 lat (usec): min=1082, max=5596, avg=4557.56, stdev=353.37 00:16:20.204 clat percentiles (usec): 00:16:20.204 | 1.00th=[ 3032], 5.00th=[ 3949], 10.00th=[ 4178], 20.00th=[ 4359], 00:16:20.204 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4621], 00:16:20.204 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 4883], 95.00th=[ 4948], 00:16:20.204 | 99.00th=[ 5211], 99.50th=[ 5276], 99.90th=[ 5407], 99.95th=[ 5407], 00:16:20.204 | 99.99th=[ 5604] 00:16:20.204 bw ( KiB/s): min=56920, max=56920, per=47.64%, avg=56920.00, stdev= 0.00, samples=1 00:16:20.204 iops : min=14230, max=14230, avg=14230.00, stdev= 0.00, samples=1 00:16:20.204 lat (usec) : 250=0.01%, 500=0.01%, 1000=0.01% 00:16:20.204 lat (msec) : 2=0.31%, 4=3.97%, 10=95.71% 00:16:20.204 cpu : usr=3.80%, sys=6.80%, ctx=1764, majf=0, minf=1 00:16:20.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:20.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:20.204 issued rwts: total=13675,13824,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:20.204 00:16:20.204 Run status group 0 (all jobs): 00:16:20.204 READ: bw=113MiB/s (118MB/s), 19.9MiB/s-53.4MiB/s (20.9MB/s-56.0MB/s), io=113MiB (119MB), run=1001-1005msec 00:16:20.204 WRITE: bw=117MiB/s (122MB/s), 20.9MiB/s-53.9MiB/s (21.9MB/s-56.6MB/s), io=117MiB (123MB), run=1001-1005msec 00:16:20.204 00:16:20.204 Disk stats (read/write): 00:16:20.204 nvme0n1: ios=4387/4608, merge=0/0, ticks=17299/17379, in_queue=34678, util=86.67% 00:16:20.204 nvme0n2: ios=4347/4608, merge=0/0, ticks=17379/17389, in_queue=34768, util=86.71% 00:16:20.204 nvme0n3: ios=4312/4608, merge=0/0, ticks=17325/17328, in_queue=34653, util=88.88% 00:16:20.204 nvme0n4: ios=11743/11776, merge=0/0, ticks=49240/46823, in_queue=96063, util=89.72% 00:16:20.204 11:24:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:20.204 [global] 00:16:20.204 thread=1 00:16:20.204 invalidate=1 00:16:20.204 rw=randwrite 00:16:20.204 time_based=1 00:16:20.204 runtime=1 00:16:20.204 ioengine=libaio 00:16:20.204 direct=1 00:16:20.204 bs=4096 00:16:20.204 iodepth=128 00:16:20.204 norandommap=0 00:16:20.204 numjobs=1 00:16:20.204 00:16:20.204 verify_dump=1 00:16:20.204 verify_backlog=512 00:16:20.204 verify_state_save=0 00:16:20.204 do_verify=1 00:16:20.204 verify=crc32c-intel 00:16:20.204 [job0] 00:16:20.204 filename=/dev/nvme0n1 00:16:20.204 [job1] 00:16:20.204 filename=/dev/nvme0n2 00:16:20.204 [job2] 00:16:20.204 filename=/dev/nvme0n3 
00:16:20.204 [job3] 00:16:20.204 filename=/dev/nvme0n4 00:16:20.204 Could not set queue depth (nvme0n1) 00:16:20.204 Could not set queue depth (nvme0n2) 00:16:20.204 Could not set queue depth (nvme0n3) 00:16:20.204 Could not set queue depth (nvme0n4) 00:16:20.475 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:20.475 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:20.475 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:20.475 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:20.475 fio-3.35 00:16:20.475 Starting 4 threads 00:16:21.884 00:16:21.884 job0: (groupid=0, jobs=1): err= 0: pid=3581648: Mon Jun 10 11:24:50 2024 00:16:21.884 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:16:21.884 slat (nsec): min=1161, max=2331.5k, avg=87456.44, stdev=298240.32 00:16:21.884 clat (usec): min=5559, max=13706, avg=11394.90, stdev=1058.22 00:16:21.884 lat (usec): min=5561, max=13708, avg=11482.35, stdev=1037.56 00:16:21.884 clat percentiles (usec): 00:16:21.885 | 1.00th=[ 8094], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10552], 00:16:21.885 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11469], 60.00th=[11994], 00:16:21.885 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12518], 95.00th=[12649], 00:16:21.885 | 99.00th=[12911], 99.50th=[13042], 99.90th=[13435], 99.95th=[13698], 00:16:21.885 | 99.99th=[13698] 00:16:21.885 write: IOPS=5836, BW=22.8MiB/s (23.9MB/s)(22.9MiB/1003msec); 0 zone resets 00:16:21.885 slat (nsec): min=1648, max=3099.5k, avg=84677.74, stdev=284971.99 00:16:21.885 clat (usec): min=1801, max=13670, avg=10741.14, stdev=1464.97 00:16:21.885 lat (usec): min=2738, max=13680, avg=10825.82, stdev=1455.90 00:16:21.885 clat percentiles (usec): 00:16:21.885 | 1.00th=[ 5669], 5.00th=[ 7963], 10.00th=[ 8356], 20.00th=[10028], 00:16:21.885 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10945], 60.00th=[11338], 00:16:21.885 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12125], 95.00th=[12387], 00:16:21.885 | 99.00th=[12649], 99.50th=[12780], 99.90th=[13566], 99.95th=[13566], 00:16:21.885 | 99.99th=[13698] 00:16:21.885 bw ( KiB/s): min=21240, max=24576, per=17.05%, avg=22908.00, stdev=2358.91, samples=2 00:16:21.885 iops : min= 5310, max= 6144, avg=5727.00, stdev=589.73, samples=2 00:16:21.885 lat (msec) : 2=0.01%, 4=0.16%, 10=13.13%, 20=86.71% 00:16:21.885 cpu : usr=1.60%, sys=4.39%, ctx=1864, majf=0, minf=1 00:16:21.885 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:16:21.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:21.885 issued rwts: total=5632,5854,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.885 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:21.885 job1: (groupid=0, jobs=1): err= 0: pid=3581650: Mon Jun 10 11:24:50 2024 00:16:21.885 read: IOPS=9188, BW=35.9MiB/s (37.6MB/s)(36.0MiB/1003msec) 00:16:21.885 slat (nsec): min=1134, max=4105.7k, avg=50565.01, stdev=280121.82 00:16:21.885 clat (usec): min=3037, max=14069, avg=6600.39, stdev=3290.19 00:16:21.885 lat (usec): min=3039, max=14076, avg=6650.96, stdev=3306.79 00:16:21.885 clat percentiles (usec): 00:16:21.885 | 1.00th=[ 3818], 5.00th=[ 4178], 10.00th=[ 4359], 20.00th=[ 4490], 00:16:21.885 | 30.00th=[ 
4621], 40.00th=[ 4752], 50.00th=[ 4883], 60.00th=[ 5014], 00:16:21.885 | 70.00th=[ 5276], 80.00th=[11994], 90.00th=[12387], 95.00th=[12518], 00:16:21.885 | 99.00th=[12780], 99.50th=[12911], 99.90th=[13960], 99.95th=[14091], 00:16:21.885 | 99.99th=[14091] 00:16:21.885 write: IOPS=9657, BW=37.7MiB/s (39.6MB/s)(37.8MiB/1003msec); 0 zone resets 00:16:21.885 slat (nsec): min=1611, max=3266.1k, avg=53442.28, stdev=280449.78 00:16:21.885 clat (usec): min=1367, max=22534, avg=6785.65, stdev=3777.86 00:16:21.885 lat (usec): min=2856, max=22543, avg=6839.09, stdev=3798.82 00:16:21.885 clat percentiles (usec): 00:16:21.885 | 1.00th=[ 3720], 5.00th=[ 4047], 10.00th=[ 4146], 20.00th=[ 4293], 00:16:21.885 | 30.00th=[ 4424], 40.00th=[ 4555], 50.00th=[ 4686], 60.00th=[ 4883], 00:16:21.885 | 70.00th=[ 5407], 80.00th=[11863], 90.00th=[12256], 95.00th=[12387], 00:16:21.885 | 99.00th=[20579], 99.50th=[21627], 99.90th=[22414], 99.95th=[22414], 00:16:21.885 | 99.99th=[22414] 00:16:21.885 bw ( KiB/s): min=23256, max=53216, per=28.46%, avg=38236.00, stdev=21184.92, samples=2 00:16:21.885 iops : min= 5814, max=13304, avg=9559.00, stdev=5296.23, samples=2 00:16:21.885 lat (msec) : 2=0.01%, 4=3.32%, 10=71.21%, 20=24.91%, 50=0.55% 00:16:21.885 cpu : usr=1.90%, sys=4.49%, ctx=1460, majf=0, minf=1 00:16:21.885 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:16:21.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:21.885 issued rwts: total=9216,9686,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.885 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:21.885 job2: (groupid=0, jobs=1): err= 0: pid=3581651: Mon Jun 10 11:24:50 2024 00:16:21.885 read: IOPS=8167, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1003msec) 00:16:21.885 slat (nsec): min=1182, max=4078.8k, avg=61623.41, stdev=259438.91 00:16:21.885 clat (usec): min=2994, max=13603, avg=7994.53, stdev=2973.87 00:16:21.885 lat (usec): min=3000, max=13608, avg=8056.15, stdev=2988.66 00:16:21.885 clat percentiles (usec): 00:16:21.885 | 1.00th=[ 4948], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5669], 00:16:21.885 | 30.00th=[ 5866], 40.00th=[ 6063], 50.00th=[ 6259], 60.00th=[ 6521], 00:16:21.885 | 70.00th=[11469], 80.00th=[12256], 90.00th=[12518], 95.00th=[12518], 00:16:21.885 | 99.00th=[12780], 99.50th=[12911], 99.90th=[13042], 99.95th=[13173], 00:16:21.885 | 99.99th=[13566] 00:16:21.885 write: IOPS=8394, BW=32.8MiB/s (34.4MB/s)(32.9MiB/1003msec); 0 zone resets 00:16:21.885 slat (nsec): min=1648, max=4295.3k, avg=56749.34, stdev=240009.25 00:16:21.885 clat (usec): min=1156, max=13745, avg=7333.01, stdev=2850.36 00:16:21.885 lat (usec): min=1194, max=13747, avg=7389.76, stdev=2865.79 00:16:21.885 clat percentiles (usec): 00:16:21.885 | 1.00th=[ 3982], 5.00th=[ 4817], 10.00th=[ 5080], 20.00th=[ 5342], 00:16:21.885 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5932], 60.00th=[ 6063], 00:16:21.885 | 70.00th=[ 6587], 80.00th=[11863], 90.00th=[11994], 95.00th=[12256], 00:16:21.885 | 99.00th=[12518], 99.50th=[12518], 99.90th=[13698], 99.95th=[13698], 00:16:21.885 | 99.99th=[13698] 00:16:21.885 bw ( KiB/s): min=21312, max=45032, per=24.69%, avg=33172.00, stdev=16772.57, samples=2 00:16:21.885 iops : min= 5328, max=11258, avg=8293.00, stdev=4193.14, samples=2 00:16:21.885 lat (msec) : 2=0.07%, 4=0.48%, 10=70.20%, 20=29.25% 00:16:21.885 cpu : usr=2.40%, sys=4.39%, ctx=1541, majf=0, minf=1 00:16:21.885 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:16:21.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:21.885 issued rwts: total=8192,8420,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.885 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:21.885 job3: (groupid=0, jobs=1): err= 0: pid=3581652: Mon Jun 10 11:24:50 2024 00:16:21.885 read: IOPS=9586, BW=37.4MiB/s (39.3MB/s)(37.5MiB/1002msec) 00:16:21.885 slat (nsec): min=1165, max=2851.0k, avg=52042.13, stdev=170268.54 00:16:21.885 clat (usec): min=871, max=13798, avg=6674.06, stdev=2661.61 00:16:21.885 lat (usec): min=1222, max=13800, avg=6726.11, stdev=2681.50 00:16:21.885 clat percentiles (usec): 00:16:21.885 | 1.00th=[ 3884], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4621], 00:16:21.885 | 30.00th=[ 4752], 40.00th=[ 4817], 50.00th=[ 5014], 60.00th=[ 5276], 00:16:21.885 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10421], 95.00th=[10683], 00:16:21.885 | 99.00th=[12125], 99.50th=[13042], 99.90th=[13435], 99.95th=[13698], 00:16:21.885 | 99.99th=[13829] 00:16:21.885 write: IOPS=9708, BW=37.9MiB/s (39.8MB/s)(38.0MiB/1002msec); 0 zone resets 00:16:21.885 slat (nsec): min=1639, max=2528.8k, avg=49703.64, stdev=161430.47 00:16:21.885 clat (usec): min=3455, max=11875, avg=6454.38, stdev=2541.08 00:16:21.885 lat (usec): min=3462, max=13800, avg=6504.09, stdev=2562.13 00:16:21.885 clat percentiles (usec): 00:16:21.885 | 1.00th=[ 3884], 5.00th=[ 4113], 10.00th=[ 4228], 20.00th=[ 4424], 00:16:21.885 | 30.00th=[ 4555], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 5080], 00:16:21.885 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10159], 95.00th=[10421], 00:16:21.885 | 99.00th=[10814], 99.50th=[10945], 99.90th=[11207], 99.95th=[11338], 00:16:21.885 | 99.99th=[11863] 00:16:21.885 bw ( KiB/s): min=25872, max=25872, per=19.26%, avg=25872.00, stdev= 0.00, samples=1 00:16:21.885 iops : min= 6468, max= 6468, avg=6468.00, stdev= 0.00, samples=1 00:16:21.885 lat (usec) : 1000=0.01% 00:16:21.885 lat (msec) : 2=0.15%, 4=1.73%, 10=80.27%, 20=17.84% 00:16:21.885 cpu : usr=1.50%, sys=5.79%, ctx=2071, majf=0, minf=1 00:16:21.885 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:16:21.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:21.885 issued rwts: total=9606,9728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.885 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:21.885 00:16:21.885 Run status group 0 (all jobs): 00:16:21.885 READ: bw=127MiB/s (133MB/s), 21.9MiB/s-37.4MiB/s (23.0MB/s-39.3MB/s), io=128MiB (134MB), run=1002-1003msec 00:16:21.885 WRITE: bw=131MiB/s (138MB/s), 22.8MiB/s-37.9MiB/s (23.9MB/s-39.8MB/s), io=132MiB (138MB), run=1002-1003msec 00:16:21.885 00:16:21.885 Disk stats (read/write): 00:16:21.885 nvme0n1: ios=4847/5120, merge=0/0, ticks=16451/16551, in_queue=33002, util=86.17% 00:16:21.885 nvme0n2: ios=8629/8704, merge=0/0, ticks=15390/15466, in_queue=30856, util=86.71% 00:16:21.885 nvme0n3: ios=7364/7680, merge=0/0, ticks=14566/14590, in_queue=29156, util=88.88% 00:16:21.885 nvme0n4: ios=7630/7680, merge=0/0, ticks=17190/16430, in_queue=33620, util=89.62% 00:16:21.885 11:24:50 nvmf_rdma.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:21.885 11:24:50 nvmf_rdma.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3581732 00:16:21.885 11:24:50 nvmf_rdma.nvmf_fio_target -- 
target/fio.sh@61 -- # sleep 3 00:16:21.885 11:24:50 nvmf_rdma.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:21.885 [global] 00:16:21.885 thread=1 00:16:21.885 invalidate=1 00:16:21.885 rw=read 00:16:21.885 time_based=1 00:16:21.885 runtime=10 00:16:21.885 ioengine=libaio 00:16:21.885 direct=1 00:16:21.885 bs=4096 00:16:21.885 iodepth=1 00:16:21.885 norandommap=1 00:16:21.885 numjobs=1 00:16:21.885 00:16:21.885 [job0] 00:16:21.885 filename=/dev/nvme0n1 00:16:21.885 [job1] 00:16:21.885 filename=/dev/nvme0n2 00:16:21.885 [job2] 00:16:21.885 filename=/dev/nvme0n3 00:16:21.885 [job3] 00:16:21.885 filename=/dev/nvme0n4 00:16:21.885 Could not set queue depth (nvme0n1) 00:16:21.885 Could not set queue depth (nvme0n2) 00:16:21.885 Could not set queue depth (nvme0n3) 00:16:21.885 Could not set queue depth (nvme0n4) 00:16:22.153 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:22.153 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:22.153 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:22.153 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:22.153 fio-3.35 00:16:22.153 Starting 4 threads 00:16:24.696 11:24:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:24.957 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=55640064, buflen=4096 00:16:24.957 fio: pid=3582141, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:24.957 11:24:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:24.957 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=59232256, buflen=4096 00:16:24.957 fio: pid=3582135, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:24.957 11:24:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:24.957 11:24:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:25.217 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=11001856, buflen=4096 00:16:25.217 fio: pid=3582107, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:25.217 11:24:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:25.217 11:24:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:25.478 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=25243648, buflen=4096 00:16:25.478 fio: pid=3582119, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:25.478 11:24:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:25.478 11:24:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:25.478 00:16:25.478 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, 
func=io_u error, error=Remote I/O error): pid=3582107: Mon Jun 10 11:24:54 2024 00:16:25.478 read: IOPS=6531, BW=25.5MiB/s (26.8MB/s)(74.5MiB/2920msec) 00:16:25.478 slat (usec): min=5, max=12910, avg=15.58, stdev=176.92 00:16:25.478 clat (usec): min=27, max=476, avg=134.40, stdev=98.98 00:16:25.478 lat (usec): min=50, max=13183, avg=149.98, stdev=207.66 00:16:25.478 clat percentiles (usec): 00:16:25.478 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 55], 00:16:25.478 | 30.00th=[ 58], 40.00th=[ 69], 50.00th=[ 76], 60.00th=[ 93], 00:16:25.478 | 70.00th=[ 225], 80.00th=[ 253], 90.00th=[ 277], 95.00th=[ 297], 00:16:25.478 | 99.00th=[ 383], 99.50th=[ 400], 99.90th=[ 437], 99.95th=[ 445], 00:16:25.478 | 99.99th=[ 469] 00:16:25.478 bw ( KiB/s): min=15792, max=29632, per=26.99%, avg=23883.20, stdev=5879.18, samples=5 00:16:25.478 iops : min= 3948, max= 7408, avg=5970.80, stdev=1469.80, samples=5 00:16:25.478 lat (usec) : 50=6.27%, 100=54.84%, 250=17.88%, 500=21.01% 00:16:25.478 cpu : usr=4.73%, sys=13.43%, ctx=19076, majf=0, minf=1 00:16:25.478 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:25.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.478 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.478 issued rwts: total=19071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.478 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:25.478 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3582119: Mon Jun 10 11:24:54 2024 00:16:25.478 read: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(88.1MiB/3149msec) 00:16:25.478 slat (usec): min=5, max=12931, avg=14.68, stdev=186.58 00:16:25.478 clat (usec): min=32, max=8814, avg=122.21, stdev=110.81 00:16:25.478 lat (usec): min=50, max=13211, avg=136.89, stdev=221.27 00:16:25.478 clat percentiles (usec): 00:16:25.478 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 53], 00:16:25.478 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 70], 60.00th=[ 78], 00:16:25.478 | 70.00th=[ 145], 80.00th=[ 239], 90.00th=[ 269], 95.00th=[ 285], 00:16:25.478 | 99.00th=[ 379], 99.50th=[ 400], 99.90th=[ 441], 99.95th=[ 461], 00:16:25.478 | 99.99th=[ 498] 00:16:25.478 bw ( KiB/s): min=18352, max=42088, per=31.82%, avg=28159.00, stdev=8519.90, samples=6 00:16:25.478 iops : min= 4588, max=10522, avg=7039.67, stdev=2129.91, samples=6 00:16:25.478 lat (usec) : 50=7.51%, 100=59.43%, 250=16.45%, 500=16.60% 00:16:25.478 lat (msec) : 10=0.01% 00:16:25.478 cpu : usr=4.67%, sys=13.31%, ctx=22554, majf=0, minf=1 00:16:25.478 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:25.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.478 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.478 issued rwts: total=22548,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.478 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:25.478 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3582135: Mon Jun 10 11:24:54 2024 00:16:25.478 read: IOPS=5234, BW=20.4MiB/s (21.4MB/s)(56.5MiB/2763msec) 00:16:25.478 slat (usec): min=5, max=15489, avg=17.00, stdev=161.95 00:16:25.478 clat (usec): min=47, max=494, avg=169.53, stdev=102.22 00:16:25.478 lat (usec): min=56, max=15734, avg=186.53, stdev=195.67 00:16:25.478 clat percentiles (usec): 00:16:25.478 | 1.00th=[ 55], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 68], 
00:16:25.478 | 30.00th=[ 75], 40.00th=[ 83], 50.00th=[ 147], 60.00th=[ 231], 00:16:25.478 | 70.00th=[ 249], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 343], 00:16:25.478 | 99.00th=[ 400], 99.50th=[ 416], 99.90th=[ 453], 99.95th=[ 461], 00:16:25.478 | 99.99th=[ 469] 00:16:25.478 bw ( KiB/s): min=16144, max=29696, per=24.26%, avg=21467.20, stdev=5185.30, samples=5 00:16:25.478 iops : min= 4036, max= 7424, avg=5366.80, stdev=1296.33, samples=5 00:16:25.478 lat (usec) : 50=0.02%, 100=44.45%, 250=26.06%, 500=29.46% 00:16:25.478 cpu : usr=3.62%, sys=13.00%, ctx=14464, majf=0, minf=1 00:16:25.478 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:25.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.478 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.478 issued rwts: total=14462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.478 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:25.478 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3582141: Mon Jun 10 11:24:54 2024 00:16:25.478 read: IOPS=5235, BW=20.4MiB/s (21.4MB/s)(53.1MiB/2595msec) 00:16:25.478 slat (nsec): min=5254, max=64842, avg=15681.71, stdev=11340.92 00:16:25.478 clat (usec): min=43, max=560, avg=171.42, stdev=100.53 00:16:25.478 lat (usec): min=55, max=566, avg=187.11, stdev=106.97 00:16:25.478 clat percentiles (usec): 00:16:25.478 | 1.00th=[ 56], 5.00th=[ 60], 10.00th=[ 64], 20.00th=[ 73], 00:16:25.478 | 30.00th=[ 77], 40.00th=[ 87], 50.00th=[ 149], 60.00th=[ 231], 00:16:25.478 | 70.00th=[ 249], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 338], 00:16:25.478 | 99.00th=[ 400], 99.50th=[ 416], 99.90th=[ 457], 99.95th=[ 465], 00:16:25.478 | 99.99th=[ 502] 00:16:25.478 bw ( KiB/s): min=14616, max=28424, per=23.92%, avg=21166.40, stdev=5604.73, samples=5 00:16:25.478 iops : min= 3654, max= 7106, avg=5291.60, stdev=1401.18, samples=5 00:16:25.478 lat (usec) : 50=0.02%, 100=43.70%, 250=26.59%, 500=29.67%, 750=0.01% 00:16:25.478 cpu : usr=4.28%, sys=12.95%, ctx=13585, majf=0, minf=2 00:16:25.478 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:25.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.478 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.478 issued rwts: total=13585,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.478 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:25.478 00:16:25.478 Run status group 0 (all jobs): 00:16:25.478 READ: bw=86.4MiB/s (90.6MB/s), 20.4MiB/s-28.0MiB/s (21.4MB/s-29.3MB/s), io=272MiB (285MB), run=2595-3149msec 00:16:25.478 00:16:25.478 Disk stats (read/write): 00:16:25.478 nvme0n1: ios=18200/0, merge=0/0, ticks=1638/0, in_queue=1638, util=93.79% 00:16:25.478 nvme0n2: ios=21961/0, merge=0/0, ticks=1870/0, in_queue=1870, util=94.06% 00:16:25.478 nvme0n3: ios=13828/0, merge=0/0, ticks=1644/0, in_queue=1644, util=96.16% 00:16:25.478 nvme0n4: ios=12701/0, merge=0/0, ticks=1460/0, in_queue=1460, util=96.09% 00:16:25.739 11:24:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:25.739 11:24:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:25.739 11:24:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:16:25.739 11:24:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:26.000 11:24:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:26.000 11:24:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:26.261 11:24:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:26.261 11:24:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:26.261 11:24:55 nvmf_rdma.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:26.261 11:24:55 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # wait 3581732 00:16:26.261 11:24:55 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:26.261 11:24:55 nvmf_rdma.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:27.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1218 -- # local i=0 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1230 -- # return 0 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:27.644 nvmf hotplug test: fio failed as expected 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:27.644 rmmod nvme_rdma 00:16:27.644 rmmod nvme_fabrics 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3578263 ']' 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3578263 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@949 -- # '[' -z 3578263 ']' 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@953 -- # kill -0 3578263 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@954 -- # uname 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:27.644 11:24:56 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3578263 00:16:27.905 11:24:56 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:27.905 11:24:56 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:27.905 11:24:56 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3578263' 00:16:27.905 killing process with pid 3578263 00:16:27.905 11:24:56 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@968 -- # kill 3578263 00:16:27.905 11:24:56 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@973 -- # wait 3578263 00:16:27.905 11:24:56 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:27.905 11:24:56 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:27.905 00:16:27.905 real 0m27.406s 00:16:27.905 user 2m36.533s 00:16:27.905 sys 0m10.161s 00:16:27.905 11:24:56 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:27.905 11:24:56 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.905 ************************************ 00:16:27.905 END TEST nvmf_fio_target 00:16:27.905 ************************************ 00:16:28.166 11:24:56 nvmf_rdma -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:16:28.166 11:24:56 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:28.166 11:24:56 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:28.166 11:24:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:28.166 ************************************ 00:16:28.166 START TEST nvmf_bdevio 00:16:28.166 ************************************ 00:16:28.166 11:24:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:16:28.166 * Looking for test storage... 
00:16:28.166 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:16:28.166 11:24:57 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 
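
The e810, x722, and mlx arrays filled in at nvmf/common.sh@296-318 are lookup tables of known NIC PCI device IDs, and because this job runs with SPDK_TEST_NVMF_NICS=mlx5, the @327-328 branch replaces pci_devs with the Mellanox list before the per-device scan below. Both ports the scan then reports match 0x15b3:0x1015, which can be cross-checked outside the harness, assuming lspci is installed:

    # List PCI functions with Mellanox vendor ID 0x15b3 and device ID 0x1015.
    lspci -d 15b3:1015
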
00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:16:34.752 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.752 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:16:34.752 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:16:34.753 Found net devices under 0000:98:00.0: mlx_0_0 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:16:34.753 Found net devices under 0000:98:00.1: mlx_0_1 00:16:34.753 
11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@420 -- # rdma_device_init 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # uname 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:34.753 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:34.753 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:16:34.753 altname enp152s0f0np0 00:16:34.753 altname ens817f0np0 00:16:34.753 inet 192.168.100.8/24 scope global mlx_0_0 00:16:34.753 valid_lft forever preferred_lft forever 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:34.753 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:34.753 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:16:34.753 altname enp152s0f1np1 00:16:34.753 altname ens817f1np1 00:16:34.753 inet 192.168.100.9/24 scope global mlx_0_1 00:16:34.753 valid_lft forever preferred_lft forever 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- 
nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:34.753 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:35.015 192.168.100.9' 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:35.015 192.168.100.9' 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # head -n 1 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:35.015 192.168.100.9' 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # tail -n +2 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # head -n 1 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
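
The two addresses wired into the test just above (NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP, split out of RDMA_IP_LIST with head/tail) come from the get_ip_address helper, whose ip/awk/cut pipeline appears repeatedly in the trace. The same extraction as a standalone snippet, using the interface names from this run:

    # Print the bare IPv4 address of an interface, without the /prefix suffix.
    get_ip_address() {
        local ifc=$1
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0    # -> 192.168.100.8 in this run
    get_ip_address mlx_0_1    # -> 192.168.100.9 in this run
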
00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3586899 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3586899 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@830 -- # '[' -z 3586899 ']' 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:35.015 11:25:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:35.015 [2024-06-10 11:25:03.846801] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:16:35.015 [2024-06-10 11:25:03.846869] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.015 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.015 [2024-06-10 11:25:03.930127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:35.276 [2024-06-10 11:25:04.021985] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.276 [2024-06-10 11:25:04.022041] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.276 [2024-06-10 11:25:04.022050] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.276 [2024-06-10 11:25:04.022057] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.276 [2024-06-10 11:25:04.022063] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
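
nvmfappstart has just launched build/bin/nvmf_tgt with core mask 0x78 (cores 3-6, matching the four reactors that start next) and handed pid 3586899 to waitforlisten, which blocks until the app answers on /var/tmp/spdk.sock. A simplified poll in the same spirit, assuming SPDK's scripts/rpc.py is on PATH; the retry count and sleep interval here are illustrative, and the real waitforlisten does more thorough liveness checking:

    # Block until the target answers on its RPC socket, or bail if the pid dies.
    pid=3586899                               # pid reported by nvmfappstart above
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1 && break
        sleep 0.5
    done
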
00:16:35.276 [2024-06-10 11:25:04.022236] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:16:35.276 [2024-06-10 11:25:04.022397] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:16:35.276 [2024-06-10 11:25:04.022560] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:16:35.276 [2024-06-10 11:25:04.022561] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:16:35.846 11:25:04 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:35.846 11:25:04 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@863 -- # return 0 00:16:35.846 11:25:04 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:35.846 11:25:04 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:35.846 11:25:04 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:35.846 11:25:04 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.846 11:25:04 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:35.846 11:25:04 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:35.846 11:25:04 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:35.846 [2024-06-10 11:25:04.716291] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1dda9d0/0x1ddeec0) succeed. 00:16:35.846 [2024-06-10 11:25:04.730906] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ddc010/0x1e20550) succeed. 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:36.107 Malloc0 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:36.107 [2024-06-10 11:25:04.933810] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:36.107 { 00:16:36.107 "params": { 00:16:36.107 "name": "Nvme$subsystem", 00:16:36.107 "trtype": "$TEST_TRANSPORT", 00:16:36.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:36.107 "adrfam": "ipv4", 00:16:36.107 "trsvcid": "$NVMF_PORT", 00:16:36.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:36.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:36.107 "hdgst": ${hdgst:-false}, 00:16:36.107 "ddgst": ${ddgst:-false} 00:16:36.107 }, 00:16:36.107 "method": "bdev_nvme_attach_controller" 00:16:36.107 } 00:16:36.107 EOF 00:16:36.107 )") 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:16:36.107 11:25:04 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:36.107 "params": { 00:16:36.107 "name": "Nvme1", 00:16:36.107 "trtype": "rdma", 00:16:36.107 "traddr": "192.168.100.8", 00:16:36.107 "adrfam": "ipv4", 00:16:36.107 "trsvcid": "4420", 00:16:36.107 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:36.107 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:36.107 "hdgst": false, 00:16:36.107 "ddgst": false 00:16:36.107 }, 00:16:36.107 "method": "bdev_nvme_attach_controller" 00:16:36.107 }' 00:16:36.107 [2024-06-10 11:25:04.988313] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
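
Everything this bdevio run needs was provisioned above through rpc_cmd, a thin wrapper that forwards to SPDK's scripts/rpc.py on the target's /var/tmp/spdk.sock. The five traced calls are equivalent to:

    rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
    rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

bdevio itself takes no transport flags: gen_nvmf_target_json expands the heredoc above into the bdev_nvme_attach_controller entry shown by the printf and, judging from the jq step, wraps it in a subsystems/bdev envelope before it reaches --json /dev/fd/62. Reconstructed under that assumption, the equivalent manual invocation would be:

    # Attach the RDMA-exported namespace as bdev Nvme1n1 and run the suite on it.
    ./test/bdev/bdevio/bdevio --json /dev/stdin <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "rdma",
                "traddr": "192.168.100.8",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF
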
00:16:36.107 [2024-06-10 11:25:04.988381] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3587135 ] 00:16:36.107 EAL: No free 2048 kB hugepages reported on node 1 00:16:36.107 [2024-06-10 11:25:05.056260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:36.398 [2024-06-10 11:25:05.132728] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.398 [2024-06-10 11:25:05.132870] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:36.398 [2024-06-10 11:25:05.132965] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.398 I/O targets: 00:16:36.398 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:36.398 00:16:36.398 00:16:36.398 CUnit - A unit testing framework for C - Version 2.1-3 00:16:36.398 http://cunit.sourceforge.net/ 00:16:36.398 00:16:36.398 00:16:36.399 Suite: bdevio tests on: Nvme1n1 00:16:36.399 Test: blockdev write read block ...passed 00:16:36.399 Test: blockdev write zeroes read block ...passed 00:16:36.399 Test: blockdev write zeroes read no split ...passed 00:16:36.399 Test: blockdev write zeroes read split ...passed 00:16:36.399 Test: blockdev write zeroes read split partial ...passed 00:16:36.399 Test: blockdev reset ...[2024-06-10 11:25:05.349713] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:36.659 [2024-06-10 11:25:05.379131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:36.659 [2024-06-10 11:25:05.420429] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
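
The reset test above walks the full reconnect path: the controller is deliberately disconnected, in-flight completions surface as CQ transport error -6 (ENXIO, 'No such device or address', exactly as printed), and bdev_nvme then reports the reset successful. The (SCT/SC) status pairs in the comparev/writev output below are likewise the deliberately exercised error path, not failures; the run still finishes with all 23 tests passed. An illustrative decoder for the two pairs this log prints, using the NVMe status names that appear alongside them:

    # Map the (SCT/SC) pairs from the fused compare-and-write completions below.
    decode_status() {
        case "$1" in
            02/85) echo "media error: COMPARE FAILURE" ;;
            00/09) echo "generic: command ABORTED - FAILED FUSED" ;;
            *)     echo "status $1 not seen in this run" ;;
        esac
    }
    decode_status 02/85
    decode_status 00/09
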
00:16:36.659 passed 00:16:36.659 Test: blockdev write read 8 blocks ...passed 00:16:36.659 Test: blockdev write read size > 128k ...passed 00:16:36.659 Test: blockdev write read invalid size ...passed 00:16:36.659 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:36.659 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:36.659 Test: blockdev write read max offset ...passed 00:16:36.659 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:36.659 Test: blockdev writev readv 8 blocks ...passed 00:16:36.659 Test: blockdev writev readv 30 x 1block ...passed 00:16:36.659 Test: blockdev writev readv block ...passed 00:16:36.659 Test: blockdev writev readv size > 128k ...passed 00:16:36.659 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:36.659 Test: blockdev comparev and writev ...[2024-06-10 11:25:05.425679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.659 [2024-06-10 11:25:05.425705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:36.659 [2024-06-10 11:25:05.425713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.659 [2024-06-10 11:25:05.425718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:36.659 [2024-06-10 11:25:05.425895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.659 [2024-06-10 11:25:05.425902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:36.659 [2024-06-10 11:25:05.425909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.659 [2024-06-10 11:25:05.425914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:36.659 [2024-06-10 11:25:05.426089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.659 [2024-06-10 11:25:05.426097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:36.659 [2024-06-10 11:25:05.426104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.659 [2024-06-10 11:25:05.426109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:36.659 [2024-06-10 11:25:05.426291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.659 [2024-06-10 11:25:05.426299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:36.659 [2024-06-10 11:25:05.426305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.659 [2024-06-10 11:25:05.426311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:36.659 passed 00:16:36.659 Test: blockdev nvme passthru rw ...passed 00:16:36.659 Test: blockdev nvme passthru vendor specific ...[2024-06-10 11:25:05.427158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:36.660 [2024-06-10 11:25:05.427167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:36.660 [2024-06-10 11:25:05.427207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:36.660 [2024-06-10 11:25:05.427212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:36.660 [2024-06-10 11:25:05.427262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:36.660 [2024-06-10 11:25:05.427268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:36.660 [2024-06-10 11:25:05.427316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:36.660 [2024-06-10 11:25:05.427324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:36.660 passed 00:16:36.660 Test: blockdev nvme admin passthru ...passed 00:16:36.660 Test: blockdev copy ...passed 00:16:36.660 00:16:36.660 Run Summary: Type Total Ran Passed Failed Inactive 00:16:36.660 suites 1 1 n/a 0 0 00:16:36.660 tests 23 23 23 0 0 00:16:36.660 asserts 152 152 152 0 n/a 00:16:36.660 00:16:36.660 Elapsed time = 0.252 seconds 00:16:36.660 11:25:05 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:36.660 11:25:05 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:36.660 11:25:05 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:36.660 11:25:05 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:36.660 11:25:05 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:36.660 11:25:05 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:16:36.660 11:25:05 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:36.660 11:25:05 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:16:36.660 11:25:05 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:36.660 11:25:05 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:36.660 11:25:05 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:16:36.660 11:25:05 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:36.660 11:25:05 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:36.660 rmmod nvme_rdma 00:16:36.660 rmmod nvme_fabrics 00:16:36.660 11:25:05 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:36.920 11:25:05 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:16:36.920 11:25:05 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:16:36.920 11:25:05 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3586899 ']' 00:16:36.920 11:25:05 
nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3586899 00:16:36.920 11:25:05 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@949 -- # '[' -z 3586899 ']' 00:16:36.920 11:25:05 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@953 -- # kill -0 3586899 00:16:36.920 11:25:05 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@954 -- # uname 00:16:36.921 11:25:05 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:36.921 11:25:05 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3586899 00:16:36.921 11:25:05 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:16:36.921 11:25:05 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:16:36.921 11:25:05 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3586899' 00:16:36.921 killing process with pid 3586899 00:16:36.921 11:25:05 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@968 -- # kill 3586899 00:16:36.921 11:25:05 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@973 -- # wait 3586899 00:16:37.181 11:25:05 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:37.181 11:25:05 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:37.181 00:16:37.181 real 0m9.055s 00:16:37.181 user 0m10.597s 00:16:37.181 sys 0m5.674s 00:16:37.181 11:25:05 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:37.181 11:25:05 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:37.181 ************************************ 00:16:37.181 END TEST nvmf_bdevio 00:16:37.181 ************************************ 00:16:37.181 11:25:06 nvmf_rdma -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:16:37.181 11:25:06 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:37.181 11:25:06 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:37.181 11:25:06 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:37.181 ************************************ 00:16:37.181 START TEST nvmf_auth_target 00:16:37.181 ************************************ 00:16:37.181 11:25:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:16:37.181 * Looking for test storage... 
00:16:37.181 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:37.181 11:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@59 -- # nvmftestinit 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:37.442 11:25:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:16:45.584 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:45.584 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:16:45.585 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:16:45.585 Found net devices under 0000:98:00.0: mlx_0_0 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:16:45.585 Found net devices under 0000:98:00.1: mlx_0_1 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@420 -- # rdma_device_init 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # uname 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for 
net_dev in "${net_devs[@]}" 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:45.585 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:45.585 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:16:45.585 altname enp152s0f0np0 00:16:45.585 altname ens817f0np0 00:16:45.585 inet 192.168.100.8/24 scope global mlx_0_0 00:16:45.585 valid_lft forever preferred_lft forever 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:45.585 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:45.585 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:16:45.585 altname enp152s0f1np1 00:16:45.585 altname ens817f1np1 00:16:45.585 inet 192.168.100.9/24 scope global mlx_0_1 00:16:45.585 valid_lft forever preferred_lft forever 
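Aside for readers following the trace: the per-interface lookup in get_ip_address above reduces to a three-stage pipeline over "ip -o -4 addr show". A minimal stand-alone sketch of that step; the interface name is the one seen in this run and is only a placeholder on any other machine.

#!/usr/bin/env bash
# Sketch of the address lookup the harness performs above (not part of the harness).
# "mlx_0_0" is the RDMA netdev observed in this log; substitute your own.
iface=mlx_0_0
ip_addr=$(ip -o -4 addr show "$iface" | awk '{print $4}' | cut -d/ -f1)
if [[ -z "$ip_addr" ]]; then
    echo "no IPv4 address on $iface" >&2
else
    echo "IPv4 on $iface: $ip_addr"    # 192.168.100.8 in this run
fi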
00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:45.585 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:45.586 
11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:45.586 192.168.100.9' 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:45.586 192.168.100.9' 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # head -n 1 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:45.586 192.168.100.9' 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # tail -n +2 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # head -n 1 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3590960 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3590960 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 3590960 ']' 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
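As an aside, the first/second target selection traced above is simply head/tail over the newline-separated RDMA_IP_LIST. A small sketch of that step, using the two addresses observed in this run (placeholders anywhere else).

#!/usr/bin/env bash
# Sketch of how RDMA_IP_LIST is split into the first and second target IPs.
RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"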
00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:45.586 11:25:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3591298 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8a3defeb29a6c871a19e9abebc932918e5ee0db6990d105f 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.6qB 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8a3defeb29a6c871a19e9abebc932918e5ee0db6990d105f 0 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8a3defeb29a6c871a19e9abebc932918e5ee0db6990d105f 0 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8a3defeb29a6c871a19e9abebc932918e5ee0db6990d105f 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.6qB 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.6qB 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.6qB 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@723 -- # local digest len file key 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=42b5a2de17b0e223c1db99f28c9ec5de292ec8a5785d4dd8e593a9d55b59c871 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.2UW 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 42b5a2de17b0e223c1db99f28c9ec5de292ec8a5785d4dd8e593a9d55b59c871 3 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 42b5a2de17b0e223c1db99f28c9ec5de292ec8a5785d4dd8e593a9d55b59c871 3 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=42b5a2de17b0e223c1db99f28c9ec5de292ec8a5785d4dd8e593a9d55b59c871 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.2UW 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.2UW 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.2UW 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=47f41fd3a15dfdd046f875b1f2cb3959 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.6OY 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 47f41fd3a15dfdd046f875b1f2cb3959 1 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 47f41fd3a15dfdd046f875b1f2cb3959 1 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # 
prefix=DHHC-1 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=47f41fd3a15dfdd046f875b1f2cb3959 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.6OY 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.6OY 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.6OY 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:45.586 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c5f69ee642d8f1c75edaed0b316a55068fc76f455fe81d3b 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.qyi 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c5f69ee642d8f1c75edaed0b316a55068fc76f455fe81d3b 2 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c5f69ee642d8f1c75edaed0b316a55068fc76f455fe81d3b 2 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c5f69ee642d8f1c75edaed0b316a55068fc76f455fe81d3b 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.qyi 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.qyi 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.qyi 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target 
-- nvmf/common.sh@727 -- # key=b029ab4352ae6456df8ef8bb5bb9e64c42b47a486393a405 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.2FR 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b029ab4352ae6456df8ef8bb5bb9e64c42b47a486393a405 2 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b029ab4352ae6456df8ef8bb5bb9e64c42b47a486393a405 2 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b029ab4352ae6456df8ef8bb5bb9e64c42b47a486393a405 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.2FR 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.2FR 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.2FR 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=db5ad97d757b724ee402216a51bb8332 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.rzV 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key db5ad97d757b724ee402216a51bb8332 1 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 db5ad97d757b724ee402216a51bb8332 1 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=db5ad97d757b724ee402216a51bb8332 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.rzV 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.rzV 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.rzV 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=622253e214e343a42d5639a96a48be900e7808396ff21e3301000badf675564a 00:16:45.587 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:45.848 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.gyw 00:16:45.848 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 622253e214e343a42d5639a96a48be900e7808396ff21e3301000badf675564a 3 00:16:45.848 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 622253e214e343a42d5639a96a48be900e7808396ff21e3301000badf675564a 3 00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=622253e214e343a42d5639a96a48be900e7808396ff21e3301000badf675564a 00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.gyw 00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.gyw 00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.gyw 00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3590960 00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 3590960 ']' 00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
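For orientation, each gen_dhchap_key call traced above draws random hex from /dev/urandom with xxd, then wraps it as a DHHC-1 secret via the inline python helper before storing it in a mode-0600 key file. A condensed sketch of the generation step only; the DHHC-1 layout noted in the comment is an assumption, and the actual encoding is left to the helper.

#!/usr/bin/env bash
# Sketch of the key-material generation seen in gen_dhchap_key above.
# A 48-character hex key needs 24 random bytes (two hex digits per byte).
len=48
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
keyfile=$(mktemp -t spdk.key-null.XXX)
# The harness then formats this as a DHHC-1 secret (assumed layout:
# "DHHC-1:<digest id>:<base64 of key bytes plus checksum>:") with its
# python helper and writes that string into $keyfile.
chmod 0600 "$keyfile"
echo "raw key material: $key"
echo "key file: $keyfile"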
00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3591298 /var/tmp/host.sock 00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 3591298 ']' 00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/host.sock 00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:45.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:45.849 11:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.110 11:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:46.110 11:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:16:46.110 11:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:46.110 11:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:46.110 11:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.110 11:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:46.110 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:46.110 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.6qB 00:16:46.110 11:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:46.110 11:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.110 11:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:46.110 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.6qB 00:16:46.110 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.6qB 00:16:46.371 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.2UW ]] 00:16:46.371 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2UW 00:16:46.371 11:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:46.371 11:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.371 11:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:46.371 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2UW 00:16:46.371 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2UW 00:16:46.632 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:46.632 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.6OY 00:16:46.632 11:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:46.632 11:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.632 11:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:46.632 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.6OY 00:16:46.632 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.6OY 00:16:46.632 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.qyi ]] 00:16:46.632 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qyi 00:16:46.632 11:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:46.632 11:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.632 11:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:46.632 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qyi 00:16:46.632 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qyi 00:16:46.893 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:46.893 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.2FR 00:16:46.893 11:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:46.893 11:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.893 11:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:46.893 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.2FR 00:16:46.893 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.2FR 00:16:46.893 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.rzV ]] 00:16:46.893 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rzV 00:16:46.893 11:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:46.893 11:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.893 11:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:46.893 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rzV 00:16:46.893 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rzV 00:16:47.154 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:47.154 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.gyw 00:16:47.154 11:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:47.154 11:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.154 11:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:47.154 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.gyw 00:16:47.154 11:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.gyw 00:16:47.154 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:47.154 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:47.154 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.154 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.154 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:47.154 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:47.415 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:47.415 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.415 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:47.415 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:47.415 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:47.415 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.415 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.415 11:25:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:47.415 11:25:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.415 11:25:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:47.415 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.415 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.675 00:16:47.675 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.675 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.675 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.938 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.938 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.938 11:25:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:47.938 11:25:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.938 11:25:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:47.938 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.938 { 00:16:47.938 "cntlid": 1, 00:16:47.938 "qid": 0, 00:16:47.938 "state": "enabled", 00:16:47.938 "listen_address": { 00:16:47.938 "trtype": "RDMA", 00:16:47.938 "adrfam": "IPv4", 00:16:47.938 "traddr": "192.168.100.8", 00:16:47.938 "trsvcid": "4420" 00:16:47.938 }, 00:16:47.938 "peer_address": { 00:16:47.938 "trtype": "RDMA", 00:16:47.938 "adrfam": "IPv4", 00:16:47.938 "traddr": "192.168.100.8", 00:16:47.938 "trsvcid": "44209" 00:16:47.938 }, 00:16:47.938 "auth": { 00:16:47.938 "state": "completed", 00:16:47.938 "digest": "sha256", 00:16:47.938 "dhgroup": "null" 00:16:47.938 } 00:16:47.938 } 00:16:47.938 ]' 00:16:47.938 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.938 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.938 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.938 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:47.938 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.938 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.938 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.938 11:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.199 11:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:OGEzZGVmZWIyOWE2Yzg3MWExOWU5YWJlYmM5MzI5MThlNWVlMGRiNjk5MGQxMDVmZCSLCg==: --dhchap-ctrl-secret DHHC-1:03:NDJiNWEyZGUxN2IwZTIyM2MxZGI5OWYyOGM5ZWM1ZGUyOTJlYzhhNTc4NWQ0ZGQ4ZTU5M2E5ZDU1YjU5Yzg3MU4mqFQ=: 00:16:49.142 11:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.142 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:49.142 11:25:18 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.142 11:25:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.142 11:25:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.142 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.142 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:49.142 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:49.404 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:49.404 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.404 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:49.404 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:49.404 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:49.404 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.404 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.404 11:25:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.404 11:25:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.404 11:25:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.404 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.404 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.665 00:16:49.665 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.665 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.665 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.665 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.665 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.665 11:25:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.665 11:25:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.665 11:25:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.665 11:25:18 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.665 { 00:16:49.665 "cntlid": 3, 00:16:49.665 "qid": 0, 00:16:49.665 "state": "enabled", 00:16:49.665 "listen_address": { 00:16:49.665 "trtype": "RDMA", 00:16:49.665 "adrfam": "IPv4", 00:16:49.665 "traddr": "192.168.100.8", 00:16:49.665 "trsvcid": "4420" 00:16:49.665 }, 00:16:49.665 "peer_address": { 00:16:49.665 "trtype": "RDMA", 00:16:49.666 "adrfam": "IPv4", 00:16:49.666 "traddr": "192.168.100.8", 00:16:49.666 "trsvcid": "41254" 00:16:49.666 }, 00:16:49.666 "auth": { 00:16:49.666 "state": "completed", 00:16:49.666 "digest": "sha256", 00:16:49.666 "dhgroup": "null" 00:16:49.666 } 00:16:49.666 } 00:16:49.666 ]' 00:16:49.666 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.666 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.666 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.926 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:49.926 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.926 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.926 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.926 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.927 11:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:NDdmNDFmZDNhMTVkZmRkMDQ2Zjg3NWIxZjJjYjM5NTm8RK2q: --dhchap-ctrl-secret DHHC-1:02:YzVmNjllZTY0MmQ4ZjFjNzVlZGFlZDBiMzE2YTU1MDY4ZmM3NmY0NTVmZTgxZDNieRjndw==: 00:16:50.868 11:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.129 11:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:51.129 11:25:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:51.129 11:25:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.129 11:25:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:51.129 11:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.129 11:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:51.129 11:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:51.129 11:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:51.129 11:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.129 11:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 
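Stepping back, every connect_authenticate iteration traced above (and continuing below for the remaining keys and dhgroups) runs the same RPC sequence against the host and target sockets. A condensed sketch of one pass; the rpc.py path, NQNs, host UUID, address, and key names are copied from this run and are placeholders anywhere else.

#!/usr/bin/env bash
# Condensed sketch of one connect_authenticate pass as traced in this log.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6

# 1. Restrict the host side to one digest/dhgroup combination.
$RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
# 2. Allow the host on the subsystem with the keys under test (target socket).
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 3. Attach a controller through the host app, authenticating in-band over RDMA.
$RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q $HOSTNQN -n $SUBNQN \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 4. Verify the qpair finished authentication with the expected parameters.
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'    # expect "completed"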
00:16:51.129 11:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:51.129 11:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:51.129 11:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.129 11:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.129 11:25:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:51.129 11:25:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.129 11:25:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:51.129 11:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.129 11:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.389 00:16:51.389 11:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.389 11:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.389 11:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.650 11:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.650 11:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.650 11:25:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:51.650 11:25:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.650 11:25:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:51.650 11:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.650 { 00:16:51.650 "cntlid": 5, 00:16:51.650 "qid": 0, 00:16:51.650 "state": "enabled", 00:16:51.650 "listen_address": { 00:16:51.650 "trtype": "RDMA", 00:16:51.650 "adrfam": "IPv4", 00:16:51.650 "traddr": "192.168.100.8", 00:16:51.650 "trsvcid": "4420" 00:16:51.650 }, 00:16:51.650 "peer_address": { 00:16:51.650 "trtype": "RDMA", 00:16:51.650 "adrfam": "IPv4", 00:16:51.650 "traddr": "192.168.100.8", 00:16:51.650 "trsvcid": "35436" 00:16:51.650 }, 00:16:51.650 "auth": { 00:16:51.650 "state": "completed", 00:16:51.651 "digest": "sha256", 00:16:51.651 "dhgroup": "null" 00:16:51.651 } 00:16:51.651 } 00:16:51.651 ]' 00:16:51.651 11:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.651 11:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.651 11:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.651 11:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null 
== \n\u\l\l ]] 00:16:51.651 11:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.651 11:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.651 11:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.651 11:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.910 11:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:YjAyOWFiNDM1MmFlNjQ1NmRmOGVmOGJiNWJiOWU2NGM0MmI0N2E0ODYzOTNhNDA1TSuKxA==: --dhchap-ctrl-secret DHHC-1:01:ZGI1YWQ5N2Q3NTdiNzI0ZWU0MDIyMTZhNTFiYjgzMzKkUHJB: 00:16:52.849 11:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.850 11:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:52.850 11:25:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:52.850 11:25:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.850 11:25:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:52.850 11:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.850 11:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:52.850 11:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:53.110 11:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:53.110 11:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.110 11:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:53.110 11:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:53.110 11:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:53.110 11:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.110 11:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:53.110 11:25:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.110 11:25:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.110 11:25:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.110 11:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key3 00:16:53.110 11:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:53.379 00:16:53.379 11:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.379 11:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.379 11:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.379 11:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.379 11:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.379 11:25:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.379 11:25:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.379 11:25:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.379 11:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.379 { 00:16:53.379 "cntlid": 7, 00:16:53.379 "qid": 0, 00:16:53.379 "state": "enabled", 00:16:53.379 "listen_address": { 00:16:53.379 "trtype": "RDMA", 00:16:53.379 "adrfam": "IPv4", 00:16:53.379 "traddr": "192.168.100.8", 00:16:53.379 "trsvcid": "4420" 00:16:53.379 }, 00:16:53.379 "peer_address": { 00:16:53.379 "trtype": "RDMA", 00:16:53.379 "adrfam": "IPv4", 00:16:53.379 "traddr": "192.168.100.8", 00:16:53.379 "trsvcid": "33169" 00:16:53.379 }, 00:16:53.379 "auth": { 00:16:53.379 "state": "completed", 00:16:53.379 "digest": "sha256", 00:16:53.379 "dhgroup": "null" 00:16:53.379 } 00:16:53.379 } 00:16:53.379 ]' 00:16:53.638 11:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.638 11:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.638 11:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.638 11:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:53.639 11:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.639 11:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.639 11:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.639 11:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.898 11:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:NjIyMjUzZTIxNGUzNDNhNDJkNTYzOWE5NmE0OGJlOTAwZTc4MDgzOTZmZjIxZTMzMDEwMDBiYWRmNjc1NTY0YZfzqcw=: 00:16:54.837 11:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.837 11:25:23 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:54.837 11:25:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:54.837 11:25:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.837 11:25:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:54.837 11:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.837 11:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.837 11:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:54.837 11:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:54.837 11:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:54.837 11:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.837 11:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:54.837 11:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:54.837 11:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:54.837 11:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.837 11:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.837 11:25:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:54.837 11:25:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.837 11:25:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:54.837 11:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.837 11:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.097 00:16:55.097 11:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.097 11:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.097 11:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.358 11:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.358 11:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:55.358 11:25:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:55.358 11:25:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.358 11:25:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:55.358 11:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.358 { 00:16:55.358 "cntlid": 9, 00:16:55.358 "qid": 0, 00:16:55.358 "state": "enabled", 00:16:55.358 "listen_address": { 00:16:55.358 "trtype": "RDMA", 00:16:55.358 "adrfam": "IPv4", 00:16:55.358 "traddr": "192.168.100.8", 00:16:55.358 "trsvcid": "4420" 00:16:55.358 }, 00:16:55.358 "peer_address": { 00:16:55.358 "trtype": "RDMA", 00:16:55.358 "adrfam": "IPv4", 00:16:55.358 "traddr": "192.168.100.8", 00:16:55.358 "trsvcid": "37960" 00:16:55.358 }, 00:16:55.358 "auth": { 00:16:55.359 "state": "completed", 00:16:55.359 "digest": "sha256", 00:16:55.359 "dhgroup": "ffdhe2048" 00:16:55.359 } 00:16:55.359 } 00:16:55.359 ]' 00:16:55.359 11:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.359 11:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.359 11:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.359 11:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:55.359 11:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.359 11:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.359 11:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.359 11:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.619 11:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:OGEzZGVmZWIyOWE2Yzg3MWExOWU5YWJlYmM5MzI5MThlNWVlMGRiNjk5MGQxMDVmZCSLCg==: --dhchap-ctrl-secret DHHC-1:03:NDJiNWEyZGUxN2IwZTIyM2MxZGI5OWYyOGM5ZWM1ZGUyOTJlYzhhNTc4NWQ0ZGQ4ZTU5M2E5ZDU1YjU5Yzg3MU4mqFQ=: 00:16:56.606 11:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.606 11:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:56.606 11:25:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:56.606 11:25:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.606 11:25:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:56.606 11:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.606 11:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:56.606 11:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:56.866 11:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:56.866 11:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.866 11:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:56.866 11:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:56.866 11:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:56.866 11:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.866 11:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.866 11:25:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:56.866 11:25:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.866 11:25:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:56.866 11:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.867 11:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.127 00:16:57.127 11:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.127 11:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.127 11:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.127 11:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.127 11:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.127 11:25:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.127 11:25:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.127 11:25:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.127 11:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.127 { 00:16:57.127 "cntlid": 11, 00:16:57.127 "qid": 0, 00:16:57.127 "state": "enabled", 00:16:57.127 "listen_address": { 00:16:57.127 "trtype": "RDMA", 00:16:57.127 "adrfam": "IPv4", 00:16:57.127 "traddr": "192.168.100.8", 00:16:57.127 "trsvcid": "4420" 00:16:57.127 }, 00:16:57.127 "peer_address": { 00:16:57.127 "trtype": "RDMA", 00:16:57.127 "adrfam": "IPv4", 00:16:57.127 "traddr": "192.168.100.8", 00:16:57.127 "trsvcid": "41120" 00:16:57.127 }, 00:16:57.127 "auth": { 00:16:57.127 "state": "completed", 00:16:57.127 
"digest": "sha256", 00:16:57.127 "dhgroup": "ffdhe2048" 00:16:57.127 } 00:16:57.127 } 00:16:57.127 ]' 00:16:57.127 11:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.127 11:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.127 11:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.387 11:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:57.387 11:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.387 11:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.387 11:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.387 11:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.387 11:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:NDdmNDFmZDNhMTVkZmRkMDQ2Zjg3NWIxZjJjYjM5NTm8RK2q: --dhchap-ctrl-secret DHHC-1:02:YzVmNjllZTY0MmQ4ZjFjNzVlZGFlZDBiMzE2YTU1MDY4ZmM3NmY0NTVmZTgxZDNieRjndw==: 00:16:58.328 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.589 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:58.589 11:25:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:58.589 11:25:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.589 11:25:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:58.589 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.589 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:58.589 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:58.589 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:58.589 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.589 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:58.589 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:58.589 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:58.589 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.589 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.589 
11:25:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:58.589 11:25:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.589 11:25:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:58.589 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.589 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.852 00:16:58.852 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.852 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.852 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.113 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.113 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.113 11:25:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:59.113 11:25:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.113 11:25:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:59.113 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.113 { 00:16:59.113 "cntlid": 13, 00:16:59.113 "qid": 0, 00:16:59.113 "state": "enabled", 00:16:59.113 "listen_address": { 00:16:59.113 "trtype": "RDMA", 00:16:59.113 "adrfam": "IPv4", 00:16:59.113 "traddr": "192.168.100.8", 00:16:59.113 "trsvcid": "4420" 00:16:59.113 }, 00:16:59.113 "peer_address": { 00:16:59.113 "trtype": "RDMA", 00:16:59.113 "adrfam": "IPv4", 00:16:59.113 "traddr": "192.168.100.8", 00:16:59.113 "trsvcid": "32910" 00:16:59.113 }, 00:16:59.113 "auth": { 00:16:59.113 "state": "completed", 00:16:59.113 "digest": "sha256", 00:16:59.113 "dhgroup": "ffdhe2048" 00:16:59.113 } 00:16:59.113 } 00:16:59.113 ]' 00:16:59.113 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.113 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.113 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.113 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:59.113 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.113 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.113 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.113 11:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller 
nvme0 00:16:59.373 11:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:YjAyOWFiNDM1MmFlNjQ1NmRmOGVmOGJiNWJiOWU2NGM0MmI0N2E0ODYzOTNhNDA1TSuKxA==: --dhchap-ctrl-secret DHHC-1:01:ZGI1YWQ5N2Q3NTdiNzI0ZWU0MDIyMTZhNTFiYjgzMzKkUHJB: 00:17:00.314 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.314 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:00.314 11:25:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:00.314 11:25:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.314 11:25:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:00.314 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.314 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:00.314 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:00.574 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:00.574 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.574 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:00.574 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:00.574 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:00.574 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.574 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:00.574 11:25:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:00.574 11:25:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.574 11:25:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:00.574 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.574 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.834 00:17:00.834 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.834 11:25:29 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.834 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.834 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.834 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.834 11:25:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:00.834 11:25:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.834 11:25:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:00.834 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.834 { 00:17:00.834 "cntlid": 15, 00:17:00.834 "qid": 0, 00:17:00.834 "state": "enabled", 00:17:00.834 "listen_address": { 00:17:00.834 "trtype": "RDMA", 00:17:00.834 "adrfam": "IPv4", 00:17:00.834 "traddr": "192.168.100.8", 00:17:00.834 "trsvcid": "4420" 00:17:00.834 }, 00:17:00.834 "peer_address": { 00:17:00.834 "trtype": "RDMA", 00:17:00.834 "adrfam": "IPv4", 00:17:00.834 "traddr": "192.168.100.8", 00:17:00.834 "trsvcid": "36358" 00:17:00.834 }, 00:17:00.834 "auth": { 00:17:00.834 "state": "completed", 00:17:00.834 "digest": "sha256", 00:17:00.834 "dhgroup": "ffdhe2048" 00:17:00.834 } 00:17:00.834 } 00:17:00.834 ]' 00:17:00.834 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.834 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.834 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.095 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:01.095 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.095 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.095 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.095 11:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.095 11:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:NjIyMjUzZTIxNGUzNDNhNDJkNTYzOWE5NmE0OGJlOTAwZTc4MDgzOTZmZjIxZTMzMDEwMDBiYWRmNjc1NTY0YZfzqcw=: 00:17:02.035 11:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.035 11:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:02.036 11:25:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.036 11:25:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.036 11:25:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.036 11:25:30 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.036 11:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.036 11:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:02.036 11:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:02.296 11:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:02.296 11:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.296 11:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:02.296 11:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:02.296 11:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:02.296 11:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.296 11:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.296 11:25:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.296 11:25:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.296 11:25:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.296 11:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.296 11:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.556 00:17:02.556 11:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.556 11:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.556 11:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.816 11:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.816 11:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.816 11:25:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.816 11:25:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.816 11:25:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.816 11:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.816 { 00:17:02.816 "cntlid": 17, 00:17:02.816 "qid": 0, 00:17:02.816 "state": "enabled", 00:17:02.816 
"listen_address": { 00:17:02.816 "trtype": "RDMA", 00:17:02.816 "adrfam": "IPv4", 00:17:02.816 "traddr": "192.168.100.8", 00:17:02.816 "trsvcid": "4420" 00:17:02.816 }, 00:17:02.816 "peer_address": { 00:17:02.816 "trtype": "RDMA", 00:17:02.816 "adrfam": "IPv4", 00:17:02.816 "traddr": "192.168.100.8", 00:17:02.816 "trsvcid": "42193" 00:17:02.816 }, 00:17:02.816 "auth": { 00:17:02.816 "state": "completed", 00:17:02.816 "digest": "sha256", 00:17:02.816 "dhgroup": "ffdhe3072" 00:17:02.816 } 00:17:02.816 } 00:17:02.816 ]' 00:17:02.816 11:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.816 11:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.816 11:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.816 11:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:02.816 11:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.816 11:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.816 11:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.816 11:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.076 11:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:OGEzZGVmZWIyOWE2Yzg3MWExOWU5YWJlYmM5MzI5MThlNWVlMGRiNjk5MGQxMDVmZCSLCg==: --dhchap-ctrl-secret DHHC-1:03:NDJiNWEyZGUxN2IwZTIyM2MxZGI5OWYyOGM5ZWM1ZGUyOTJlYzhhNTc4NWQ0ZGQ4ZTU5M2E5ZDU1YjU5Yzg3MU4mqFQ=: 00:17:04.038 11:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.038 11:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:04.038 11:25:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:04.038 11:25:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.038 11:25:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:04.038 11:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.039 11:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:04.039 11:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:04.299 11:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:04.299 11:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.299 11:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:04.299 11:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 
00:17:04.299 11:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:04.299 11:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.299 11:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.299 11:25:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:04.299 11:25:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.299 11:25:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:04.299 11:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.299 11:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.559 00:17:04.559 11:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.559 11:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.559 11:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.559 11:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.559 11:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.559 11:25:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:04.559 11:25:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.559 11:25:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:04.559 11:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.559 { 00:17:04.559 "cntlid": 19, 00:17:04.559 "qid": 0, 00:17:04.559 "state": "enabled", 00:17:04.559 "listen_address": { 00:17:04.559 "trtype": "RDMA", 00:17:04.559 "adrfam": "IPv4", 00:17:04.559 "traddr": "192.168.100.8", 00:17:04.559 "trsvcid": "4420" 00:17:04.559 }, 00:17:04.559 "peer_address": { 00:17:04.559 "trtype": "RDMA", 00:17:04.559 "adrfam": "IPv4", 00:17:04.559 "traddr": "192.168.100.8", 00:17:04.559 "trsvcid": "59357" 00:17:04.559 }, 00:17:04.559 "auth": { 00:17:04.559 "state": "completed", 00:17:04.559 "digest": "sha256", 00:17:04.559 "dhgroup": "ffdhe3072" 00:17:04.559 } 00:17:04.559 } 00:17:04.559 ]' 00:17:04.559 11:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.559 11:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:04.559 11:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.559 11:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:04.559 11:25:33 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.820 11:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.820 11:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.820 11:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.820 11:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:NDdmNDFmZDNhMTVkZmRkMDQ2Zjg3NWIxZjJjYjM5NTm8RK2q: --dhchap-ctrl-secret DHHC-1:02:YzVmNjllZTY0MmQ4ZjFjNzVlZGFlZDBiMzE2YTU1MDY4ZmM3NmY0NTVmZTgxZDNieRjndw==: 00:17:05.761 11:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.022 11:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:06.022 11:25:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:06.022 11:25:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.022 11:25:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:06.022 11:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.022 11:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:06.022 11:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:06.022 11:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:06.022 11:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.022 11:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:06.022 11:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:06.022 11:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:06.022 11:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.022 11:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.022 11:25:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:06.022 11:25:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.022 11:25:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:06.022 11:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:17:06.022 11:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.282 00:17:06.282 11:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.282 11:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.282 11:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.543 11:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.543 11:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.543 11:25:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:06.543 11:25:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.543 11:25:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:06.543 11:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.543 { 00:17:06.543 "cntlid": 21, 00:17:06.543 "qid": 0, 00:17:06.543 "state": "enabled", 00:17:06.543 "listen_address": { 00:17:06.543 "trtype": "RDMA", 00:17:06.543 "adrfam": "IPv4", 00:17:06.543 "traddr": "192.168.100.8", 00:17:06.543 "trsvcid": "4420" 00:17:06.543 }, 00:17:06.543 "peer_address": { 00:17:06.543 "trtype": "RDMA", 00:17:06.543 "adrfam": "IPv4", 00:17:06.543 "traddr": "192.168.100.8", 00:17:06.543 "trsvcid": "37838" 00:17:06.543 }, 00:17:06.543 "auth": { 00:17:06.543 "state": "completed", 00:17:06.543 "digest": "sha256", 00:17:06.543 "dhgroup": "ffdhe3072" 00:17:06.543 } 00:17:06.543 } 00:17:06.543 ]' 00:17:06.543 11:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.543 11:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.543 11:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.543 11:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:06.543 11:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.543 11:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.543 11:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.543 11:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.803 11:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:YjAyOWFiNDM1MmFlNjQ1NmRmOGVmOGJiNWJiOWU2NGM0MmI0N2E0ODYzOTNhNDA1TSuKxA==: --dhchap-ctrl-secret DHHC-1:01:ZGI1YWQ5N2Q3NTdiNzI0ZWU0MDIyMTZhNTFiYjgzMzKkUHJB: 00:17:07.746 11:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:07.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.746 11:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:07.746 11:25:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:07.746 11:25:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.746 11:25:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:07.746 11:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.746 11:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:07.746 11:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:08.005 11:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:08.005 11:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.005 11:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:08.005 11:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:08.005 11:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:08.005 11:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.005 11:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:08.005 11:25:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:08.005 11:25:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.005 11:25:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:08.005 11:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:08.005 11:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:08.265 00:17:08.265 11:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.265 11:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.265 11:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.265 11:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.265 11:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.265 11:25:37 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:17:08.265 11:25:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.265 11:25:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:08.265 11:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.265 { 00:17:08.265 "cntlid": 23, 00:17:08.265 "qid": 0, 00:17:08.265 "state": "enabled", 00:17:08.265 "listen_address": { 00:17:08.265 "trtype": "RDMA", 00:17:08.265 "adrfam": "IPv4", 00:17:08.265 "traddr": "192.168.100.8", 00:17:08.265 "trsvcid": "4420" 00:17:08.265 }, 00:17:08.265 "peer_address": { 00:17:08.265 "trtype": "RDMA", 00:17:08.265 "adrfam": "IPv4", 00:17:08.265 "traddr": "192.168.100.8", 00:17:08.265 "trsvcid": "40604" 00:17:08.265 }, 00:17:08.265 "auth": { 00:17:08.265 "state": "completed", 00:17:08.266 "digest": "sha256", 00:17:08.266 "dhgroup": "ffdhe3072" 00:17:08.266 } 00:17:08.266 } 00:17:08.266 ]' 00:17:08.266 11:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.526 11:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:08.526 11:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.526 11:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:08.526 11:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.526 11:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.526 11:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.526 11:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.787 11:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:NjIyMjUzZTIxNGUzNDNhNDJkNTYzOWE5NmE0OGJlOTAwZTc4MDgzOTZmZjIxZTMzMDEwMDBiYWRmNjc1NTY0YZfzqcw=: 00:17:09.727 11:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.727 11:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:09.727 11:25:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:09.727 11:25:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.727 11:25:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:09.727 11:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.727 11:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.727 11:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:09.727 11:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:17:09.727 11:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:17:09.727 11:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.727 11:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:09.727 11:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:09.727 11:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:09.727 11:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.727 11:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.727 11:25:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:09.727 11:25:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.988 11:25:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:09.988 11:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.988 11:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.988 00:17:10.249 11:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.249 11:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.249 11:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.249 11:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.249 11:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.249 11:25:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.249 11:25:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.249 11:25:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.249 11:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.249 { 00:17:10.249 "cntlid": 25, 00:17:10.249 "qid": 0, 00:17:10.249 "state": "enabled", 00:17:10.249 "listen_address": { 00:17:10.249 "trtype": "RDMA", 00:17:10.249 "adrfam": "IPv4", 00:17:10.249 "traddr": "192.168.100.8", 00:17:10.249 "trsvcid": "4420" 00:17:10.249 }, 00:17:10.249 "peer_address": { 00:17:10.249 "trtype": "RDMA", 00:17:10.249 "adrfam": "IPv4", 00:17:10.249 "traddr": "192.168.100.8", 00:17:10.249 "trsvcid": "43567" 00:17:10.249 }, 00:17:10.249 "auth": { 00:17:10.249 "state": "completed", 00:17:10.249 "digest": "sha256", 00:17:10.249 "dhgroup": "ffdhe4096" 00:17:10.249 } 00:17:10.249 } 00:17:10.249 ]' 00:17:10.249 11:25:39 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.249 11:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:10.249 11:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.510 11:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:10.510 11:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.510 11:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.510 11:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.510 11:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.510 11:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:OGEzZGVmZWIyOWE2Yzg3MWExOWU5YWJlYmM5MzI5MThlNWVlMGRiNjk5MGQxMDVmZCSLCg==: --dhchap-ctrl-secret DHHC-1:03:NDJiNWEyZGUxN2IwZTIyM2MxZGI5OWYyOGM5ZWM1ZGUyOTJlYzhhNTc4NWQ0ZGQ4ZTU5M2E5ZDU1YjU5Yzg3MU4mqFQ=: 00:17:11.450 11:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.450 11:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:11.450 11:25:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:11.450 11:25:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.711 11:25:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:11.711 11:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.711 11:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:11.711 11:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:11.711 11:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:17:11.711 11:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.711 11:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:11.711 11:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:11.711 11:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:11.711 11:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.711 11:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.711 11:25:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:17:11.711 11:25:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.711 11:25:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:11.711 11:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.711 11:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.972 00:17:11.972 11:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.972 11:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.972 11:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.233 11:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.233 11:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.233 11:25:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:12.233 11:25:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.233 11:25:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:12.233 11:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.233 { 00:17:12.233 "cntlid": 27, 00:17:12.233 "qid": 0, 00:17:12.233 "state": "enabled", 00:17:12.233 "listen_address": { 00:17:12.233 "trtype": "RDMA", 00:17:12.233 "adrfam": "IPv4", 00:17:12.233 "traddr": "192.168.100.8", 00:17:12.233 "trsvcid": "4420" 00:17:12.233 }, 00:17:12.233 "peer_address": { 00:17:12.233 "trtype": "RDMA", 00:17:12.233 "adrfam": "IPv4", 00:17:12.233 "traddr": "192.168.100.8", 00:17:12.233 "trsvcid": "38549" 00:17:12.233 }, 00:17:12.233 "auth": { 00:17:12.233 "state": "completed", 00:17:12.233 "digest": "sha256", 00:17:12.233 "dhgroup": "ffdhe4096" 00:17:12.233 } 00:17:12.233 } 00:17:12.233 ]' 00:17:12.233 11:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.233 11:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:12.233 11:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.233 11:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:12.233 11:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.233 11:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.233 11:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.233 11:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.494 11:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:NDdmNDFmZDNhMTVkZmRkMDQ2Zjg3NWIxZjJjYjM5NTm8RK2q: --dhchap-ctrl-secret DHHC-1:02:YzVmNjllZTY0MmQ4ZjFjNzVlZGFlZDBiMzE2YTU1MDY4ZmM3NmY0NTVmZTgxZDNieRjndw==: 00:17:13.435 11:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.435 11:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:13.435 11:25:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:13.435 11:25:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.435 11:25:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:13.435 11:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.435 11:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:13.435 11:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:13.695 11:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:17:13.695 11:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.695 11:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:13.695 11:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:13.695 11:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:13.695 11:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.695 11:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.696 11:25:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:13.696 11:25:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.696 11:25:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:13.696 11:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.696 11:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.956 00:17:13.956 11:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.956 11:25:42 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.956 11:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.956 11:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.956 11:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.956 11:25:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:13.956 11:25:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.217 11:25:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:14.217 11:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.217 { 00:17:14.217 "cntlid": 29, 00:17:14.217 "qid": 0, 00:17:14.217 "state": "enabled", 00:17:14.217 "listen_address": { 00:17:14.217 "trtype": "RDMA", 00:17:14.217 "adrfam": "IPv4", 00:17:14.217 "traddr": "192.168.100.8", 00:17:14.217 "trsvcid": "4420" 00:17:14.217 }, 00:17:14.217 "peer_address": { 00:17:14.217 "trtype": "RDMA", 00:17:14.217 "adrfam": "IPv4", 00:17:14.217 "traddr": "192.168.100.8", 00:17:14.217 "trsvcid": "52075" 00:17:14.217 }, 00:17:14.217 "auth": { 00:17:14.217 "state": "completed", 00:17:14.217 "digest": "sha256", 00:17:14.217 "dhgroup": "ffdhe4096" 00:17:14.217 } 00:17:14.217 } 00:17:14.217 ]' 00:17:14.218 11:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.218 11:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:14.218 11:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.218 11:25:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:14.218 11:25:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.218 11:25:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.218 11:25:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.218 11:25:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.478 11:25:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:YjAyOWFiNDM1MmFlNjQ1NmRmOGVmOGJiNWJiOWU2NGM0MmI0N2E0ODYzOTNhNDA1TSuKxA==: --dhchap-ctrl-secret DHHC-1:01:ZGI1YWQ5N2Q3NTdiNzI0ZWU0MDIyMTZhNTFiYjgzMzKkUHJB: 00:17:15.420 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.420 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:15.420 11:25:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:15.420 11:25:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.420 11:25:44 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:15.420 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.420 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:15.420 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:15.680 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:17:15.680 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.680 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:15.680 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:15.680 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:15.680 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.680 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:15.680 11:25:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:15.680 11:25:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.680 11:25:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:15.680 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:15.680 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:15.941 00:17:15.941 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.941 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.941 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.941 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.941 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.941 11:25:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:15.941 11:25:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.941 11:25:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:15.941 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.941 { 00:17:15.941 "cntlid": 31, 00:17:15.941 "qid": 0, 00:17:15.941 "state": "enabled", 00:17:15.941 "listen_address": { 00:17:15.941 "trtype": "RDMA", 00:17:15.941 "adrfam": "IPv4", 00:17:15.941 "traddr": 
"192.168.100.8", 00:17:15.941 "trsvcid": "4420" 00:17:15.941 }, 00:17:15.941 "peer_address": { 00:17:15.941 "trtype": "RDMA", 00:17:15.941 "adrfam": "IPv4", 00:17:15.941 "traddr": "192.168.100.8", 00:17:15.941 "trsvcid": "47603" 00:17:15.941 }, 00:17:15.941 "auth": { 00:17:15.941 "state": "completed", 00:17:15.941 "digest": "sha256", 00:17:15.941 "dhgroup": "ffdhe4096" 00:17:15.941 } 00:17:15.941 } 00:17:15.941 ]' 00:17:15.941 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:15.941 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:15.941 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.203 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:16.203 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.203 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.203 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.203 11:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.203 11:25:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:NjIyMjUzZTIxNGUzNDNhNDJkNTYzOWE5NmE0OGJlOTAwZTc4MDgzOTZmZjIxZTMzMDEwMDBiYWRmNjc1NTY0YZfzqcw=: 00:17:17.189 11:25:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.189 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:17.189 11:25:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.189 11:25:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.189 11:25:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.189 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.189 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.189 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:17.189 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:17.449 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:17:17.449 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.449 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:17.449 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:17.449 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:17.449 11:25:46 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.449 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.449 11:25:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.449 11:25:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.449 11:25:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.449 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.449 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.709 00:17:17.969 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.969 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.969 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:17.969 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.969 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.969 11:25:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.969 11:25:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.969 11:25:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.969 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:17.969 { 00:17:17.969 "cntlid": 33, 00:17:17.969 "qid": 0, 00:17:17.969 "state": "enabled", 00:17:17.969 "listen_address": { 00:17:17.969 "trtype": "RDMA", 00:17:17.969 "adrfam": "IPv4", 00:17:17.969 "traddr": "192.168.100.8", 00:17:17.969 "trsvcid": "4420" 00:17:17.969 }, 00:17:17.969 "peer_address": { 00:17:17.969 "trtype": "RDMA", 00:17:17.969 "adrfam": "IPv4", 00:17:17.969 "traddr": "192.168.100.8", 00:17:17.969 "trsvcid": "48990" 00:17:17.969 }, 00:17:17.969 "auth": { 00:17:17.969 "state": "completed", 00:17:17.969 "digest": "sha256", 00:17:17.969 "dhgroup": "ffdhe6144" 00:17:17.969 } 00:17:17.969 } 00:17:17.969 ]' 00:17:17.969 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:17.969 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:17.969 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:17.969 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:17.969 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.229 11:25:46 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.229 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.229 11:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.229 11:25:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:OGEzZGVmZWIyOWE2Yzg3MWExOWU5YWJlYmM5MzI5MThlNWVlMGRiNjk5MGQxMDVmZCSLCg==: --dhchap-ctrl-secret DHHC-1:03:NDJiNWEyZGUxN2IwZTIyM2MxZGI5OWYyOGM5ZWM1ZGUyOTJlYzhhNTc4NWQ0ZGQ4ZTU5M2E5ZDU1YjU5Yzg3MU4mqFQ=: 00:17:19.170 11:25:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.170 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:19.170 11:25:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:19.170 11:25:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.430 11:25:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:19.430 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.430 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:19.430 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:19.430 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:19.430 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.430 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:19.430 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:19.430 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:19.430 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.430 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.430 11:25:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:19.430 11:25:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.430 11:25:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:19.430 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.431 11:25:48 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.691 00:17:19.951 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.951 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.951 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.951 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.951 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.951 11:25:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:19.951 11:25:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.951 11:25:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:19.951 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.951 { 00:17:19.951 "cntlid": 35, 00:17:19.951 "qid": 0, 00:17:19.951 "state": "enabled", 00:17:19.951 "listen_address": { 00:17:19.951 "trtype": "RDMA", 00:17:19.951 "adrfam": "IPv4", 00:17:19.951 "traddr": "192.168.100.8", 00:17:19.951 "trsvcid": "4420" 00:17:19.951 }, 00:17:19.951 "peer_address": { 00:17:19.951 "trtype": "RDMA", 00:17:19.951 "adrfam": "IPv4", 00:17:19.951 "traddr": "192.168.100.8", 00:17:19.951 "trsvcid": "38648" 00:17:19.951 }, 00:17:19.951 "auth": { 00:17:19.951 "state": "completed", 00:17:19.951 "digest": "sha256", 00:17:19.951 "dhgroup": "ffdhe6144" 00:17:19.951 } 00:17:19.951 } 00:17:19.951 ]' 00:17:19.951 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.951 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:19.951 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.951 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:19.951 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.211 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.211 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.211 11:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.211 11:25:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:NDdmNDFmZDNhMTVkZmRkMDQ2Zjg3NWIxZjJjYjM5NTm8RK2q: --dhchap-ctrl-secret DHHC-1:02:YzVmNjllZTY0MmQ4ZjFjNzVlZGFlZDBiMzE2YTU1MDY4ZmM3NmY0NTVmZTgxZDNieRjndw==: 00:17:21.152 11:25:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.152 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.152 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:21.152 11:25:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:21.152 11:25:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.152 11:25:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:21.152 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.152 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:21.152 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:21.413 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:21.413 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.413 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:21.413 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:21.413 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:21.413 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.413 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.413 11:25:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:21.413 11:25:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.413 11:25:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:21.413 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.414 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.674 00:17:21.674 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.674 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.674 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.935 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.935 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.935 11:25:50 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:21.935 11:25:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.935 11:25:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:21.935 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.935 { 00:17:21.935 "cntlid": 37, 00:17:21.935 "qid": 0, 00:17:21.935 "state": "enabled", 00:17:21.935 "listen_address": { 00:17:21.935 "trtype": "RDMA", 00:17:21.935 "adrfam": "IPv4", 00:17:21.935 "traddr": "192.168.100.8", 00:17:21.935 "trsvcid": "4420" 00:17:21.935 }, 00:17:21.935 "peer_address": { 00:17:21.935 "trtype": "RDMA", 00:17:21.935 "adrfam": "IPv4", 00:17:21.935 "traddr": "192.168.100.8", 00:17:21.935 "trsvcid": "37913" 00:17:21.935 }, 00:17:21.935 "auth": { 00:17:21.935 "state": "completed", 00:17:21.935 "digest": "sha256", 00:17:21.935 "dhgroup": "ffdhe6144" 00:17:21.935 } 00:17:21.935 } 00:17:21.935 ]' 00:17:21.935 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.935 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:21.935 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.935 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:21.935 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.935 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.935 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.935 11:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.195 11:25:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:YjAyOWFiNDM1MmFlNjQ1NmRmOGVmOGJiNWJiOWU2NGM0MmI0N2E0ODYzOTNhNDA1TSuKxA==: --dhchap-ctrl-secret DHHC-1:01:ZGI1YWQ5N2Q3NTdiNzI0ZWU0MDIyMTZhNTFiYjgzMzKkUHJB: 00:17:23.136 11:25:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.136 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:23.136 11:25:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:23.136 11:25:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.136 11:25:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:23.136 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.136 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:23.136 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe6144 00:17:23.397 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:17:23.397 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.397 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:23.397 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:23.397 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:23.397 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.397 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:23.397 11:25:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:23.397 11:25:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.397 11:25:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:23.397 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.397 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.657 00:17:23.657 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.657 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.657 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.917 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.917 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.917 11:25:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:23.917 11:25:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.917 11:25:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:23.917 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.917 { 00:17:23.917 "cntlid": 39, 00:17:23.917 "qid": 0, 00:17:23.917 "state": "enabled", 00:17:23.917 "listen_address": { 00:17:23.917 "trtype": "RDMA", 00:17:23.917 "adrfam": "IPv4", 00:17:23.917 "traddr": "192.168.100.8", 00:17:23.917 "trsvcid": "4420" 00:17:23.917 }, 00:17:23.917 "peer_address": { 00:17:23.917 "trtype": "RDMA", 00:17:23.917 "adrfam": "IPv4", 00:17:23.917 "traddr": "192.168.100.8", 00:17:23.917 "trsvcid": "40580" 00:17:23.917 }, 00:17:23.917 "auth": { 00:17:23.917 "state": "completed", 00:17:23.917 "digest": "sha256", 00:17:23.917 "dhgroup": "ffdhe6144" 00:17:23.917 } 00:17:23.917 } 00:17:23.917 ]' 00:17:23.917 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.917 11:25:52 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:23.917 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.917 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:23.917 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.917 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.917 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.917 11:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.178 11:25:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:NjIyMjUzZTIxNGUzNDNhNDJkNTYzOWE5NmE0OGJlOTAwZTc4MDgzOTZmZjIxZTMzMDEwMDBiYWRmNjc1NTY0YZfzqcw=: 00:17:25.119 11:25:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.119 11:25:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:25.119 11:25:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.119 11:25:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.119 11:25:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.119 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.119 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.119 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:25.119 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:25.379 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:25.379 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.379 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:25.379 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:25.379 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:25.379 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.379 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.379 11:25:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.379 11:25:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:25.379 11:25:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.379 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.379 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.950 00:17:25.950 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.950 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.950 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.950 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.950 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.950 11:25:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.950 11:25:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.950 11:25:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.950 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.950 { 00:17:25.950 "cntlid": 41, 00:17:25.950 "qid": 0, 00:17:25.950 "state": "enabled", 00:17:25.950 "listen_address": { 00:17:25.950 "trtype": "RDMA", 00:17:25.950 "adrfam": "IPv4", 00:17:25.950 "traddr": "192.168.100.8", 00:17:25.950 "trsvcid": "4420" 00:17:25.950 }, 00:17:25.950 "peer_address": { 00:17:25.950 "trtype": "RDMA", 00:17:25.950 "adrfam": "IPv4", 00:17:25.950 "traddr": "192.168.100.8", 00:17:25.950 "trsvcid": "58573" 00:17:25.950 }, 00:17:25.950 "auth": { 00:17:25.950 "state": "completed", 00:17:25.950 "digest": "sha256", 00:17:25.950 "dhgroup": "ffdhe8192" 00:17:25.950 } 00:17:25.950 } 00:17:25.950 ]' 00:17:25.950 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.950 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:25.950 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.211 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:26.211 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.211 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.211 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.211 11:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.211 11:25:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:OGEzZGVmZWIyOWE2Yzg3MWExOWU5YWJlYmM5MzI5MThlNWVlMGRiNjk5MGQxMDVmZCSLCg==: --dhchap-ctrl-secret DHHC-1:03:NDJiNWEyZGUxN2IwZTIyM2MxZGI5OWYyOGM5ZWM1ZGUyOTJlYzhhNTc4NWQ0ZGQ4ZTU5M2E5ZDU1YjU5Yzg3MU4mqFQ=: 00:17:27.152 11:25:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.152 11:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:27.152 11:25:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.152 11:25:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.152 11:25:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.152 11:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.152 11:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:27.152 11:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:27.413 11:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:27.413 11:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.413 11:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:27.413 11:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:27.413 11:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:27.413 11:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.413 11:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.413 11:25:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.413 11:25:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.413 11:25:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.413 11:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.413 11:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.030 00:17:28.030 11:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.030 11:25:56 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.030 11:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.030 11:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.030 11:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.030 11:25:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.030 11:25:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.030 11:25:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.030 11:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.030 { 00:17:28.030 "cntlid": 43, 00:17:28.030 "qid": 0, 00:17:28.030 "state": "enabled", 00:17:28.030 "listen_address": { 00:17:28.030 "trtype": "RDMA", 00:17:28.030 "adrfam": "IPv4", 00:17:28.030 "traddr": "192.168.100.8", 00:17:28.030 "trsvcid": "4420" 00:17:28.030 }, 00:17:28.030 "peer_address": { 00:17:28.030 "trtype": "RDMA", 00:17:28.030 "adrfam": "IPv4", 00:17:28.030 "traddr": "192.168.100.8", 00:17:28.030 "trsvcid": "37110" 00:17:28.030 }, 00:17:28.030 "auth": { 00:17:28.030 "state": "completed", 00:17:28.030 "digest": "sha256", 00:17:28.030 "dhgroup": "ffdhe8192" 00:17:28.030 } 00:17:28.030 } 00:17:28.030 ]' 00:17:28.030 11:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.290 11:25:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.290 11:25:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.290 11:25:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:28.290 11:25:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.290 11:25:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.290 11:25:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.290 11:25:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.551 11:25:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:NDdmNDFmZDNhMTVkZmRkMDQ2Zjg3NWIxZjJjYjM5NTm8RK2q: --dhchap-ctrl-secret DHHC-1:02:YzVmNjllZTY0MmQ4ZjFjNzVlZGFlZDBiMzE2YTU1MDY4ZmM3NmY0NTVmZTgxZDNieRjndw==: 00:17:29.494 11:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.494 11:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:29.494 11:25:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.494 11:25:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.494 11:25:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:17:29.494 11:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.494 11:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:29.494 11:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:29.494 11:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:29.494 11:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.494 11:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:29.494 11:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:29.494 11:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:29.494 11:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.494 11:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.494 11:25:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.494 11:25:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.494 11:25:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.494 11:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.494 11:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.066 00:17:30.066 11:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.066 11:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.066 11:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.326 11:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.326 11:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.326 11:25:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.326 11:25:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.326 11:25:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.326 11:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.326 { 00:17:30.326 "cntlid": 45, 00:17:30.326 "qid": 0, 00:17:30.326 "state": "enabled", 00:17:30.326 "listen_address": { 00:17:30.326 "trtype": "RDMA", 00:17:30.326 "adrfam": "IPv4", 
00:17:30.326 "traddr": "192.168.100.8", 00:17:30.326 "trsvcid": "4420" 00:17:30.326 }, 00:17:30.326 "peer_address": { 00:17:30.326 "trtype": "RDMA", 00:17:30.326 "adrfam": "IPv4", 00:17:30.326 "traddr": "192.168.100.8", 00:17:30.326 "trsvcid": "60423" 00:17:30.326 }, 00:17:30.326 "auth": { 00:17:30.326 "state": "completed", 00:17:30.326 "digest": "sha256", 00:17:30.326 "dhgroup": "ffdhe8192" 00:17:30.326 } 00:17:30.326 } 00:17:30.326 ]' 00:17:30.326 11:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.326 11:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:30.326 11:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.326 11:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:30.326 11:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.326 11:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.326 11:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.326 11:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.586 11:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:YjAyOWFiNDM1MmFlNjQ1NmRmOGVmOGJiNWJiOWU2NGM0MmI0N2E0ODYzOTNhNDA1TSuKxA==: --dhchap-ctrl-secret DHHC-1:01:ZGI1YWQ5N2Q3NTdiNzI0ZWU0MDIyMTZhNTFiYjgzMzKkUHJB: 00:17:31.528 11:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.528 11:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:31.528 11:26:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.528 11:26:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.528 11:26:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.528 11:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.528 11:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:31.528 11:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:31.789 11:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:31.789 11:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.789 11:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:31.789 11:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:31.789 11:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:31.789 11:26:00 nvmf_rdma.nvmf_auth_target 
-- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.789 11:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:31.789 11:26:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.789 11:26:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.789 11:26:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.789 11:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:31.789 11:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:32.360 00:17:32.360 11:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.360 11:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.360 11:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.360 11:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.360 11:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.360 11:26:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:32.360 11:26:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.621 11:26:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:32.621 11:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.621 { 00:17:32.621 "cntlid": 47, 00:17:32.621 "qid": 0, 00:17:32.621 "state": "enabled", 00:17:32.621 "listen_address": { 00:17:32.621 "trtype": "RDMA", 00:17:32.621 "adrfam": "IPv4", 00:17:32.621 "traddr": "192.168.100.8", 00:17:32.621 "trsvcid": "4420" 00:17:32.621 }, 00:17:32.621 "peer_address": { 00:17:32.621 "trtype": "RDMA", 00:17:32.621 "adrfam": "IPv4", 00:17:32.621 "traddr": "192.168.100.8", 00:17:32.621 "trsvcid": "46775" 00:17:32.621 }, 00:17:32.621 "auth": { 00:17:32.621 "state": "completed", 00:17:32.621 "digest": "sha256", 00:17:32.621 "dhgroup": "ffdhe8192" 00:17:32.621 } 00:17:32.621 } 00:17:32.621 ]' 00:17:32.621 11:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.621 11:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:32.621 11:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.621 11:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:32.621 11:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.621 11:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.621 11:26:01 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.621 11:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.881 11:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:NjIyMjUzZTIxNGUzNDNhNDJkNTYzOWE5NmE0OGJlOTAwZTc4MDgzOTZmZjIxZTMzMDEwMDBiYWRmNjc1NTY0YZfzqcw=: 00:17:33.823 11:26:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.823 11:26:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:33.823 11:26:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:33.823 11:26:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.823 11:26:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:33.823 11:26:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:33.823 11:26:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:33.823 11:26:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.823 11:26:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:33.823 11:26:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:33.823 11:26:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:33.823 11:26:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.823 11:26:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:33.823 11:26:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:33.823 11:26:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:33.823 11:26:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.823 11:26:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.823 11:26:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:33.823 11:26:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.823 11:26:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:33.823 11:26:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.823 11:26:02 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.084 00:17:34.084 11:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.084 11:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.084 11:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.345 11:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.345 11:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.345 11:26:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.345 11:26:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.345 11:26:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.345 11:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.345 { 00:17:34.345 "cntlid": 49, 00:17:34.345 "qid": 0, 00:17:34.345 "state": "enabled", 00:17:34.345 "listen_address": { 00:17:34.345 "trtype": "RDMA", 00:17:34.345 "adrfam": "IPv4", 00:17:34.345 "traddr": "192.168.100.8", 00:17:34.345 "trsvcid": "4420" 00:17:34.345 }, 00:17:34.345 "peer_address": { 00:17:34.345 "trtype": "RDMA", 00:17:34.345 "adrfam": "IPv4", 00:17:34.345 "traddr": "192.168.100.8", 00:17:34.345 "trsvcid": "36631" 00:17:34.345 }, 00:17:34.345 "auth": { 00:17:34.345 "state": "completed", 00:17:34.345 "digest": "sha384", 00:17:34.345 "dhgroup": "null" 00:17:34.345 } 00:17:34.345 } 00:17:34.345 ]' 00:17:34.345 11:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.345 11:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.345 11:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.345 11:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:34.345 11:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.345 11:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.345 11:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.345 11:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.626 11:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:OGEzZGVmZWIyOWE2Yzg3MWExOWU5YWJlYmM5MzI5MThlNWVlMGRiNjk5MGQxMDVmZCSLCg==: --dhchap-ctrl-secret DHHC-1:03:NDJiNWEyZGUxN2IwZTIyM2MxZGI5OWYyOGM5ZWM1ZGUyOTJlYzhhNTc4NWQ0ZGQ4ZTU5M2E5ZDU1YjU5Yzg3MU4mqFQ=: 00:17:35.578 11:26:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.578 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.578 11:26:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:35.578 11:26:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:35.578 11:26:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.578 11:26:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:35.578 11:26:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.578 11:26:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:35.578 11:26:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:35.839 11:26:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:35.839 11:26:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.839 11:26:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:35.839 11:26:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:35.839 11:26:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:35.839 11:26:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.839 11:26:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.839 11:26:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:35.839 11:26:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.839 11:26:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:35.839 11:26:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.839 11:26:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.101 00:17:36.101 11:26:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.101 11:26:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.101 11:26:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.101 11:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.101 11:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.101 11:26:05 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.101 11:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.101 11:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.101 11:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.101 { 00:17:36.101 "cntlid": 51, 00:17:36.101 "qid": 0, 00:17:36.101 "state": "enabled", 00:17:36.101 "listen_address": { 00:17:36.101 "trtype": "RDMA", 00:17:36.101 "adrfam": "IPv4", 00:17:36.101 "traddr": "192.168.100.8", 00:17:36.101 "trsvcid": "4420" 00:17:36.101 }, 00:17:36.101 "peer_address": { 00:17:36.101 "trtype": "RDMA", 00:17:36.101 "adrfam": "IPv4", 00:17:36.101 "traddr": "192.168.100.8", 00:17:36.101 "trsvcid": "44933" 00:17:36.101 }, 00:17:36.101 "auth": { 00:17:36.101 "state": "completed", 00:17:36.101 "digest": "sha384", 00:17:36.101 "dhgroup": "null" 00:17:36.101 } 00:17:36.101 } 00:17:36.101 ]' 00:17:36.101 11:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.361 11:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.361 11:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.361 11:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:36.361 11:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.361 11:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.361 11:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.361 11:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.621 11:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:NDdmNDFmZDNhMTVkZmRkMDQ2Zjg3NWIxZjJjYjM5NTm8RK2q: --dhchap-ctrl-secret DHHC-1:02:YzVmNjllZTY0MmQ4ZjFjNzVlZGFlZDBiMzE2YTU1MDY4ZmM3NmY0NTVmZTgxZDNieRjndw==: 00:17:37.560 11:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.560 11:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:37.560 11:26:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:37.560 11:26:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.560 11:26:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:37.560 11:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.560 11:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:37.560 11:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:37.820 
11:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:37.820 11:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.820 11:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:37.820 11:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:37.820 11:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:37.820 11:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.820 11:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.820 11:26:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:37.820 11:26:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.820 11:26:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:37.820 11:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.820 11:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.820 00:17:37.820 11:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.820 11:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.820 11:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.080 11:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.080 11:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.080 11:26:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.080 11:26:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.080 11:26:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.080 11:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.080 { 00:17:38.080 "cntlid": 53, 00:17:38.080 "qid": 0, 00:17:38.080 "state": "enabled", 00:17:38.080 "listen_address": { 00:17:38.080 "trtype": "RDMA", 00:17:38.080 "adrfam": "IPv4", 00:17:38.080 "traddr": "192.168.100.8", 00:17:38.080 "trsvcid": "4420" 00:17:38.080 }, 00:17:38.080 "peer_address": { 00:17:38.080 "trtype": "RDMA", 00:17:38.080 "adrfam": "IPv4", 00:17:38.080 "traddr": "192.168.100.8", 00:17:38.080 "trsvcid": "34628" 00:17:38.080 }, 00:17:38.080 "auth": { 00:17:38.080 "state": "completed", 00:17:38.080 "digest": "sha384", 00:17:38.080 "dhgroup": "null" 00:17:38.080 } 00:17:38.080 } 00:17:38.080 ]' 00:17:38.080 11:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:17:38.080 11:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.080 11:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.080 11:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:38.080 11:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.340 11:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.340 11:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.340 11:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.340 11:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:YjAyOWFiNDM1MmFlNjQ1NmRmOGVmOGJiNWJiOWU2NGM0MmI0N2E0ODYzOTNhNDA1TSuKxA==: --dhchap-ctrl-secret DHHC-1:01:ZGI1YWQ5N2Q3NTdiNzI0ZWU0MDIyMTZhNTFiYjgzMzKkUHJB: 00:17:39.281 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.281 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:39.281 11:26:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.281 11:26:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.281 11:26:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.281 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.281 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:39.281 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:39.540 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:39.540 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.540 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:39.540 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:39.540 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:39.540 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.540 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:39.540 11:26:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.540 11:26:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.540 11:26:08 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.540 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:39.540 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:39.799 00:17:39.799 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.799 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.799 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.058 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.059 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.059 11:26:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.059 11:26:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.059 11:26:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.059 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.059 { 00:17:40.059 "cntlid": 55, 00:17:40.059 "qid": 0, 00:17:40.059 "state": "enabled", 00:17:40.059 "listen_address": { 00:17:40.059 "trtype": "RDMA", 00:17:40.059 "adrfam": "IPv4", 00:17:40.059 "traddr": "192.168.100.8", 00:17:40.059 "trsvcid": "4420" 00:17:40.059 }, 00:17:40.059 "peer_address": { 00:17:40.059 "trtype": "RDMA", 00:17:40.059 "adrfam": "IPv4", 00:17:40.059 "traddr": "192.168.100.8", 00:17:40.059 "trsvcid": "34293" 00:17:40.059 }, 00:17:40.059 "auth": { 00:17:40.059 "state": "completed", 00:17:40.059 "digest": "sha384", 00:17:40.059 "dhgroup": "null" 00:17:40.059 } 00:17:40.059 } 00:17:40.059 ]' 00:17:40.059 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.059 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.059 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.059 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:40.059 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.059 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.059 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.059 11:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.318 11:26:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret 
DHHC-1:03:NjIyMjUzZTIxNGUzNDNhNDJkNTYzOWE5NmE0OGJlOTAwZTc4MDgzOTZmZjIxZTMzMDEwMDBiYWRmNjc1NTY0YZfzqcw=: 00:17:40.887 11:26:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.147 11:26:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:41.147 11:26:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.147 11:26:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.147 11:26:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.147 11:26:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.147 11:26:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.147 11:26:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:41.147 11:26:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:41.415 11:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:41.415 11:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.415 11:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:41.415 11:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:41.415 11:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:41.415 11:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.415 11:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.415 11:26:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.415 11:26:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.415 11:26:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.415 11:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.415 11:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.415 00:17:41.679 11:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.679 11:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:17:41.679 11:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.679 11:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.679 11:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.679 11:26:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.679 11:26:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.679 11:26:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.679 11:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.679 { 00:17:41.680 "cntlid": 57, 00:17:41.680 "qid": 0, 00:17:41.680 "state": "enabled", 00:17:41.680 "listen_address": { 00:17:41.680 "trtype": "RDMA", 00:17:41.680 "adrfam": "IPv4", 00:17:41.680 "traddr": "192.168.100.8", 00:17:41.680 "trsvcid": "4420" 00:17:41.680 }, 00:17:41.680 "peer_address": { 00:17:41.680 "trtype": "RDMA", 00:17:41.680 "adrfam": "IPv4", 00:17:41.680 "traddr": "192.168.100.8", 00:17:41.680 "trsvcid": "54644" 00:17:41.680 }, 00:17:41.680 "auth": { 00:17:41.680 "state": "completed", 00:17:41.680 "digest": "sha384", 00:17:41.680 "dhgroup": "ffdhe2048" 00:17:41.680 } 00:17:41.680 } 00:17:41.680 ]' 00:17:41.680 11:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.680 11:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.680 11:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.680 11:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:41.680 11:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.940 11:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.940 11:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.940 11:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.940 11:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:OGEzZGVmZWIyOWE2Yzg3MWExOWU5YWJlYmM5MzI5MThlNWVlMGRiNjk5MGQxMDVmZCSLCg==: --dhchap-ctrl-secret DHHC-1:03:NDJiNWEyZGUxN2IwZTIyM2MxZGI5OWYyOGM5ZWM1ZGUyOTJlYzhhNTc4NWQ0ZGQ4ZTU5M2E5ZDU1YjU5Yzg3MU4mqFQ=: 00:17:42.878 11:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.878 11:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:42.878 11:26:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.879 11:26:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.879 11:26:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.879 11:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid 
in "${!keys[@]}" 00:17:42.879 11:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:42.879 11:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:43.138 11:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:43.138 11:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.138 11:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:43.138 11:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:43.138 11:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:43.138 11:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.138 11:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.138 11:26:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:43.138 11:26:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.139 11:26:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:43.139 11:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.139 11:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.397 00:17:43.398 11:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.398 11:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.398 11:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.398 11:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.398 11:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.398 11:26:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:43.398 11:26:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.398 11:26:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:43.398 11:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.398 { 00:17:43.398 "cntlid": 59, 00:17:43.398 "qid": 0, 00:17:43.398 "state": "enabled", 00:17:43.398 "listen_address": { 00:17:43.398 "trtype": "RDMA", 00:17:43.398 "adrfam": "IPv4", 00:17:43.398 "traddr": "192.168.100.8", 00:17:43.398 "trsvcid": "4420" 00:17:43.398 }, 
00:17:43.398 "peer_address": { 00:17:43.398 "trtype": "RDMA", 00:17:43.398 "adrfam": "IPv4", 00:17:43.398 "traddr": "192.168.100.8", 00:17:43.398 "trsvcid": "51412" 00:17:43.398 }, 00:17:43.398 "auth": { 00:17:43.398 "state": "completed", 00:17:43.398 "digest": "sha384", 00:17:43.398 "dhgroup": "ffdhe2048" 00:17:43.398 } 00:17:43.398 } 00:17:43.398 ]' 00:17:43.398 11:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.656 11:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.656 11:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.656 11:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:43.656 11:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.656 11:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.656 11:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.656 11:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.917 11:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:NDdmNDFmZDNhMTVkZmRkMDQ2Zjg3NWIxZjJjYjM5NTm8RK2q: --dhchap-ctrl-secret DHHC-1:02:YzVmNjllZTY0MmQ4ZjFjNzVlZGFlZDBiMzE2YTU1MDY4ZmM3NmY0NTVmZTgxZDNieRjndw==: 00:17:44.858 11:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.858 11:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:44.858 11:26:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:44.858 11:26:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.858 11:26:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:44.858 11:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.858 11:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:44.858 11:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:44.858 11:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:44.858 11:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.858 11:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:44.858 11:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:44.858 11:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:44.858 11:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:17:44.858 11:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.858 11:26:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:44.858 11:26:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.858 11:26:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:44.858 11:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.858 11:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.121 00:17:45.121 11:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.121 11:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.121 11:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.380 11:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.380 11:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.380 11:26:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:45.380 11:26:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.380 11:26:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:45.380 11:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.380 { 00:17:45.380 "cntlid": 61, 00:17:45.380 "qid": 0, 00:17:45.380 "state": "enabled", 00:17:45.380 "listen_address": { 00:17:45.380 "trtype": "RDMA", 00:17:45.380 "adrfam": "IPv4", 00:17:45.380 "traddr": "192.168.100.8", 00:17:45.380 "trsvcid": "4420" 00:17:45.380 }, 00:17:45.380 "peer_address": { 00:17:45.380 "trtype": "RDMA", 00:17:45.380 "adrfam": "IPv4", 00:17:45.380 "traddr": "192.168.100.8", 00:17:45.380 "trsvcid": "49715" 00:17:45.380 }, 00:17:45.380 "auth": { 00:17:45.380 "state": "completed", 00:17:45.380 "digest": "sha384", 00:17:45.380 "dhgroup": "ffdhe2048" 00:17:45.380 } 00:17:45.380 } 00:17:45.380 ]' 00:17:45.380 11:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.380 11:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:45.380 11:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.380 11:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:45.380 11:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.640 11:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.640 11:26:14 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.640 11:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.640 11:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:YjAyOWFiNDM1MmFlNjQ1NmRmOGVmOGJiNWJiOWU2NGM0MmI0N2E0ODYzOTNhNDA1TSuKxA==: --dhchap-ctrl-secret DHHC-1:01:ZGI1YWQ5N2Q3NTdiNzI0ZWU0MDIyMTZhNTFiYjgzMzKkUHJB: 00:17:46.579 11:26:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.579 11:26:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:46.579 11:26:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.579 11:26:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.579 11:26:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.579 11:26:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.579 11:26:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:46.579 11:26:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:46.839 11:26:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:46.839 11:26:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.839 11:26:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:46.839 11:26:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:46.839 11:26:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:46.839 11:26:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.839 11:26:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:46.839 11:26:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.839 11:26:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.839 11:26:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.839 11:26:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:46.839 11:26:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:47.099 00:17:47.099 11:26:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.099 11:26:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.099 11:26:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.099 11:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.099 11:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.099 11:26:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.099 11:26:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.099 11:26:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.099 11:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.099 { 00:17:47.099 "cntlid": 63, 00:17:47.099 "qid": 0, 00:17:47.099 "state": "enabled", 00:17:47.099 "listen_address": { 00:17:47.099 "trtype": "RDMA", 00:17:47.099 "adrfam": "IPv4", 00:17:47.099 "traddr": "192.168.100.8", 00:17:47.099 "trsvcid": "4420" 00:17:47.099 }, 00:17:47.099 "peer_address": { 00:17:47.099 "trtype": "RDMA", 00:17:47.099 "adrfam": "IPv4", 00:17:47.099 "traddr": "192.168.100.8", 00:17:47.099 "trsvcid": "37740" 00:17:47.099 }, 00:17:47.099 "auth": { 00:17:47.099 "state": "completed", 00:17:47.099 "digest": "sha384", 00:17:47.099 "dhgroup": "ffdhe2048" 00:17:47.099 } 00:17:47.099 } 00:17:47.099 ]' 00:17:47.099 11:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.359 11:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:47.359 11:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.359 11:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:47.359 11:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.359 11:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.359 11:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.359 11:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.619 11:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:NjIyMjUzZTIxNGUzNDNhNDJkNTYzOWE5NmE0OGJlOTAwZTc4MDgzOTZmZjIxZTMzMDEwMDBiYWRmNjc1NTY0YZfzqcw=: 00:17:48.188 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.448 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:48.448 11:26:17 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.448 11:26:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.448 11:26:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.448 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.448 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.449 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:48.449 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:48.449 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:48.449 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.449 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:48.449 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:48.449 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:48.449 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.449 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.449 11:26:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.449 11:26:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.449 11:26:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.449 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.449 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.709 00:17:48.709 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.709 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.709 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.969 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.969 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.969 11:26:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.969 11:26:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:48.969 11:26:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.969 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.969 { 00:17:48.969 "cntlid": 65, 00:17:48.969 "qid": 0, 00:17:48.969 "state": "enabled", 00:17:48.969 "listen_address": { 00:17:48.969 "trtype": "RDMA", 00:17:48.969 "adrfam": "IPv4", 00:17:48.969 "traddr": "192.168.100.8", 00:17:48.969 "trsvcid": "4420" 00:17:48.969 }, 00:17:48.969 "peer_address": { 00:17:48.969 "trtype": "RDMA", 00:17:48.969 "adrfam": "IPv4", 00:17:48.969 "traddr": "192.168.100.8", 00:17:48.969 "trsvcid": "48862" 00:17:48.969 }, 00:17:48.969 "auth": { 00:17:48.969 "state": "completed", 00:17:48.969 "digest": "sha384", 00:17:48.969 "dhgroup": "ffdhe3072" 00:17:48.969 } 00:17:48.969 } 00:17:48.969 ]' 00:17:48.969 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.969 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:48.969 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.969 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:48.969 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.229 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.229 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.229 11:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.229 11:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:OGEzZGVmZWIyOWE2Yzg3MWExOWU5YWJlYmM5MzI5MThlNWVlMGRiNjk5MGQxMDVmZCSLCg==: --dhchap-ctrl-secret DHHC-1:03:NDJiNWEyZGUxN2IwZTIyM2MxZGI5OWYyOGM5ZWM1ZGUyOTJlYzhhNTc4NWQ0ZGQ4ZTU5M2E5ZDU1YjU5Yzg3MU4mqFQ=: 00:17:50.169 11:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.169 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:50.169 11:26:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.169 11:26:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.169 11:26:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.169 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.169 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:50.169 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:50.429 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 
00:17:50.429 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.429 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:50.429 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:50.429 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:50.429 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.429 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.429 11:26:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.429 11:26:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.429 11:26:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.429 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.429 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.689 00:17:50.689 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.689 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.689 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.949 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.949 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.949 11:26:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.949 11:26:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.949 11:26:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.949 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.949 { 00:17:50.949 "cntlid": 67, 00:17:50.949 "qid": 0, 00:17:50.949 "state": "enabled", 00:17:50.949 "listen_address": { 00:17:50.949 "trtype": "RDMA", 00:17:50.949 "adrfam": "IPv4", 00:17:50.949 "traddr": "192.168.100.8", 00:17:50.949 "trsvcid": "4420" 00:17:50.949 }, 00:17:50.949 "peer_address": { 00:17:50.949 "trtype": "RDMA", 00:17:50.949 "adrfam": "IPv4", 00:17:50.949 "traddr": "192.168.100.8", 00:17:50.949 "trsvcid": "35345" 00:17:50.949 }, 00:17:50.949 "auth": { 00:17:50.949 "state": "completed", 00:17:50.949 "digest": "sha384", 00:17:50.949 "dhgroup": "ffdhe3072" 00:17:50.949 } 00:17:50.949 } 00:17:50.949 ]' 00:17:50.949 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.949 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha384 == \s\h\a\3\8\4 ]] 00:17:50.949 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.949 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:50.949 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.949 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.949 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.949 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.209 11:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:NDdmNDFmZDNhMTVkZmRkMDQ2Zjg3NWIxZjJjYjM5NTm8RK2q: --dhchap-ctrl-secret DHHC-1:02:YzVmNjllZTY0MmQ4ZjFjNzVlZGFlZDBiMzE2YTU1MDY4ZmM3NmY0NTVmZTgxZDNieRjndw==: 00:17:52.151 11:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.151 11:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:52.151 11:26:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.152 11:26:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.152 11:26:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.152 11:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.152 11:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:52.152 11:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:52.452 11:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:52.452 11:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.452 11:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:52.452 11:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:52.452 11:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:52.452 11:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.452 11:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.452 11:26:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.452 11:26:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.452 11:26:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
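The iterations logged above and below all repeat the same connect_authenticate cycle from target/auth.sh. Reduced to its essentials, and with long paths shortened, the per-key provisioning looks roughly like the sketch below: rpc_cmd is the autotest wrapper for the target's RPC socket, rpc.py -s /var/tmp/host.sock talks to the host-side SPDK application, <host-nqn> stands for the nqn.2014-08.org.nvmexpress:uuid:008c5ac1-... host NQN used throughout this log, and key1/ckey1 are key names registered earlier in the run (not shown here).

  # host side: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup under test
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  # target side: allow the host on the subsystem, binding it to a host key and a controller key
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: attach a controller over RDMA, authenticating with the same key pair
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1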
00:17:52.452 11:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.452 11:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.452 00:17:52.452 11:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.452 11:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.452 11:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.713 11:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.713 11:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.713 11:26:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.713 11:26:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.713 11:26:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.713 11:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.713 { 00:17:52.713 "cntlid": 69, 00:17:52.713 "qid": 0, 00:17:52.713 "state": "enabled", 00:17:52.713 "listen_address": { 00:17:52.713 "trtype": "RDMA", 00:17:52.713 "adrfam": "IPv4", 00:17:52.713 "traddr": "192.168.100.8", 00:17:52.713 "trsvcid": "4420" 00:17:52.713 }, 00:17:52.713 "peer_address": { 00:17:52.713 "trtype": "RDMA", 00:17:52.713 "adrfam": "IPv4", 00:17:52.713 "traddr": "192.168.100.8", 00:17:52.713 "trsvcid": "42044" 00:17:52.713 }, 00:17:52.713 "auth": { 00:17:52.713 "state": "completed", 00:17:52.713 "digest": "sha384", 00:17:52.713 "dhgroup": "ffdhe3072" 00:17:52.713 } 00:17:52.713 } 00:17:52.713 ]' 00:17:52.713 11:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.713 11:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:52.713 11:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.973 11:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:52.973 11:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.973 11:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.973 11:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.973 11:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.973 11:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 
--dhchap-secret DHHC-1:02:YjAyOWFiNDM1MmFlNjQ1NmRmOGVmOGJiNWJiOWU2NGM0MmI0N2E0ODYzOTNhNDA1TSuKxA==: --dhchap-ctrl-secret DHHC-1:01:ZGI1YWQ5N2Q3NTdiNzI0ZWU0MDIyMTZhNTFiYjgzMzKkUHJB: 00:17:53.913 11:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.913 11:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:53.913 11:26:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.913 11:26:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.913 11:26:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.913 11:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.913 11:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:53.914 11:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:54.174 11:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:54.174 11:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.174 11:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:54.174 11:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:54.174 11:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:54.174 11:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.174 11:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:54.174 11:26:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.174 11:26:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.174 11:26:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.174 11:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.174 11:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.434 00:17:54.434 11:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.434 11:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.434 11:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.694 
11:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.694 11:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.694 11:26:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.694 11:26:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.694 11:26:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.694 11:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.694 { 00:17:54.694 "cntlid": 71, 00:17:54.694 "qid": 0, 00:17:54.694 "state": "enabled", 00:17:54.694 "listen_address": { 00:17:54.694 "trtype": "RDMA", 00:17:54.694 "adrfam": "IPv4", 00:17:54.694 "traddr": "192.168.100.8", 00:17:54.694 "trsvcid": "4420" 00:17:54.694 }, 00:17:54.694 "peer_address": { 00:17:54.694 "trtype": "RDMA", 00:17:54.694 "adrfam": "IPv4", 00:17:54.694 "traddr": "192.168.100.8", 00:17:54.694 "trsvcid": "60727" 00:17:54.694 }, 00:17:54.694 "auth": { 00:17:54.694 "state": "completed", 00:17:54.694 "digest": "sha384", 00:17:54.694 "dhgroup": "ffdhe3072" 00:17:54.694 } 00:17:54.694 } 00:17:54.694 ]' 00:17:54.694 11:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.694 11:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.694 11:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.694 11:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:54.694 11:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.694 11:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.694 11:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.694 11:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.953 11:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:NjIyMjUzZTIxNGUzNDNhNDJkNTYzOWE5NmE0OGJlOTAwZTc4MDgzOTZmZjIxZTMzMDEwMDBiYWRmNjc1NTY0YZfzqcw=: 00:17:55.892 11:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.892 11:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:55.892 11:26:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.892 11:26:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.892 11:26:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.892 11:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.892 11:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.892 11:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:55.892 11:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:56.152 11:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:56.152 11:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.152 11:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:56.152 11:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:56.152 11:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:56.152 11:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.152 11:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.152 11:26:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:56.152 11:26:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.152 11:26:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:56.152 11:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.152 11:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.413 00:17:56.413 11:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.413 11:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.413 11:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.413 11:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.413 11:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.413 11:26:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:56.413 11:26:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.413 11:26:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:56.413 11:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.413 { 00:17:56.413 "cntlid": 73, 00:17:56.413 "qid": 0, 00:17:56.413 "state": "enabled", 00:17:56.413 "listen_address": { 00:17:56.413 "trtype": "RDMA", 00:17:56.413 "adrfam": "IPv4", 00:17:56.413 "traddr": "192.168.100.8", 00:17:56.413 "trsvcid": "4420" 00:17:56.413 }, 00:17:56.413 "peer_address": { 00:17:56.413 "trtype": "RDMA", 00:17:56.413 "adrfam": "IPv4", 00:17:56.413 
"traddr": "192.168.100.8", 00:17:56.413 "trsvcid": "34766" 00:17:56.413 }, 00:17:56.413 "auth": { 00:17:56.413 "state": "completed", 00:17:56.413 "digest": "sha384", 00:17:56.413 "dhgroup": "ffdhe4096" 00:17:56.413 } 00:17:56.413 } 00:17:56.413 ]' 00:17:56.413 11:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.673 11:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.673 11:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.673 11:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:56.673 11:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.673 11:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.673 11:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.673 11:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.933 11:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:OGEzZGVmZWIyOWE2Yzg3MWExOWU5YWJlYmM5MzI5MThlNWVlMGRiNjk5MGQxMDVmZCSLCg==: --dhchap-ctrl-secret DHHC-1:03:NDJiNWEyZGUxN2IwZTIyM2MxZGI5OWYyOGM5ZWM1ZGUyOTJlYzhhNTc4NWQ0ZGQ4ZTU5M2E5ZDU1YjU5Yzg3MU4mqFQ=: 00:17:57.873 11:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.873 11:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:57.873 11:26:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:57.873 11:26:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.873 11:26:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:57.873 11:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.873 11:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:57.873 11:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:57.873 11:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:57.873 11:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.873 11:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:57.873 11:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:57.873 11:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:57.873 11:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.873 11:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.873 11:26:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:57.873 11:26:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.873 11:26:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:58.133 11:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.133 11:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.133 00:17:58.394 11:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.394 11:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.394 11:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.394 11:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.394 11:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.394 11:26:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:58.394 11:26:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.394 11:26:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:58.394 11:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.394 { 00:17:58.394 "cntlid": 75, 00:17:58.394 "qid": 0, 00:17:58.394 "state": "enabled", 00:17:58.394 "listen_address": { 00:17:58.394 "trtype": "RDMA", 00:17:58.394 "adrfam": "IPv4", 00:17:58.394 "traddr": "192.168.100.8", 00:17:58.394 "trsvcid": "4420" 00:17:58.394 }, 00:17:58.394 "peer_address": { 00:17:58.394 "trtype": "RDMA", 00:17:58.394 "adrfam": "IPv4", 00:17:58.394 "traddr": "192.168.100.8", 00:17:58.394 "trsvcid": "32877" 00:17:58.394 }, 00:17:58.394 "auth": { 00:17:58.394 "state": "completed", 00:17:58.394 "digest": "sha384", 00:17:58.394 "dhgroup": "ffdhe4096" 00:17:58.394 } 00:17:58.394 } 00:17:58.394 ]' 00:17:58.394 11:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.394 11:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:58.394 11:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.654 11:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:58.654 11:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.654 11:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.654 11:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
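After each attach, the test verifies on both ends that DH-HMAC-CHAP actually completed with the parameters under test; the bdev_nvme_get_controllers and nvmf_subsystem_get_qpairs dumps above are consumed by jq checks equivalent to the following sketch (same placeholder conventions as the earlier sketch):

  # host side: the attached controller must be visible by name
  [[ "$(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
  # target side: qpair 0 must report the negotiated digest, dhgroup and a completed auth state
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "sha384" ]]
  [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "ffdhe4096" ]]
  [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == "completed" ]]
  # host side: detach before the next key/dhgroup combination
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0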
00:17:58.654 11:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.654 11:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:NDdmNDFmZDNhMTVkZmRkMDQ2Zjg3NWIxZjJjYjM5NTm8RK2q: --dhchap-ctrl-secret DHHC-1:02:YzVmNjllZTY0MmQ4ZjFjNzVlZGFlZDBiMzE2YTU1MDY4ZmM3NmY0NTVmZTgxZDNieRjndw==: 00:17:59.594 11:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.855 11:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:59.855 11:26:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.855 11:26:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.855 11:26:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.855 11:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.855 11:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:59.855 11:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:59.855 11:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:59.855 11:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.855 11:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:59.855 11:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:59.855 11:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:59.855 11:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.855 11:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.855 11:26:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.855 11:26:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.855 11:26:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.855 11:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.855 11:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.115 00:18:00.115 11:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.115 11:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.115 11:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.375 11:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.375 11:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.375 11:26:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.375 11:26:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.375 11:26:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.375 11:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.375 { 00:18:00.375 "cntlid": 77, 00:18:00.375 "qid": 0, 00:18:00.375 "state": "enabled", 00:18:00.375 "listen_address": { 00:18:00.375 "trtype": "RDMA", 00:18:00.375 "adrfam": "IPv4", 00:18:00.375 "traddr": "192.168.100.8", 00:18:00.375 "trsvcid": "4420" 00:18:00.375 }, 00:18:00.375 "peer_address": { 00:18:00.375 "trtype": "RDMA", 00:18:00.375 "adrfam": "IPv4", 00:18:00.375 "traddr": "192.168.100.8", 00:18:00.375 "trsvcid": "53056" 00:18:00.375 }, 00:18:00.375 "auth": { 00:18:00.375 "state": "completed", 00:18:00.375 "digest": "sha384", 00:18:00.375 "dhgroup": "ffdhe4096" 00:18:00.375 } 00:18:00.375 } 00:18:00.375 ]' 00:18:00.375 11:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.375 11:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:00.375 11:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.375 11:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:00.375 11:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.375 11:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.375 11:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.375 11:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.635 11:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:YjAyOWFiNDM1MmFlNjQ1NmRmOGVmOGJiNWJiOWU2NGM0MmI0N2E0ODYzOTNhNDA1TSuKxA==: --dhchap-ctrl-secret DHHC-1:01:ZGI1YWQ5N2Q3NTdiNzI0ZWU0MDIyMTZhNTFiYjgzMzKkUHJB: 00:18:01.575 11:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.575 11:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:01.575 11:26:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:01.575 11:26:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.575 11:26:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:01.575 11:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.575 11:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:01.575 11:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:01.835 11:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:01.835 11:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.835 11:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:01.835 11:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:01.835 11:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:01.835 11:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.835 11:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:01.835 11:26:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:01.835 11:26:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.835 11:26:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:01.835 11:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:01.835 11:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:02.095 00:18:02.095 11:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.095 11:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.095 11:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.095 11:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.095 11:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.095 11:26:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.095 11:26:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.095 11:26:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
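Each iteration also exercises the kernel initiator path: nvme-cli connects to the same subsystem with the secrets passed in DHHC-1 wire format, disconnects, and the host is then removed from the subsystem so the next key/dhgroup combination starts from a clean slate. A hedged sketch with placeholder secrets (the real values are the base64 DHHC-1 blobs visible in the log):

  # kernel initiator: in-band authentication using the DHHC-1 formatted secrets
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q <host-nqn> --hostid <host-uuid> \
      --dhchap-secret 'DHHC-1:<nn>:<base64 host secret>:' --dhchap-ctrl-secret 'DHHC-1:<nn>:<base64 ctrl secret>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # target side: deregister the host before the next iteration
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <host-nqn>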
00:18:02.095 11:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.095 { 00:18:02.095 "cntlid": 79, 00:18:02.095 "qid": 0, 00:18:02.095 "state": "enabled", 00:18:02.095 "listen_address": { 00:18:02.095 "trtype": "RDMA", 00:18:02.095 "adrfam": "IPv4", 00:18:02.095 "traddr": "192.168.100.8", 00:18:02.095 "trsvcid": "4420" 00:18:02.095 }, 00:18:02.095 "peer_address": { 00:18:02.095 "trtype": "RDMA", 00:18:02.095 "adrfam": "IPv4", 00:18:02.095 "traddr": "192.168.100.8", 00:18:02.095 "trsvcid": "46634" 00:18:02.095 }, 00:18:02.095 "auth": { 00:18:02.095 "state": "completed", 00:18:02.095 "digest": "sha384", 00:18:02.095 "dhgroup": "ffdhe4096" 00:18:02.095 } 00:18:02.095 } 00:18:02.095 ]' 00:18:02.095 11:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.355 11:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.355 11:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.355 11:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:02.355 11:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.355 11:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.355 11:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.355 11:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.615 11:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:NjIyMjUzZTIxNGUzNDNhNDJkNTYzOWE5NmE0OGJlOTAwZTc4MDgzOTZmZjIxZTMzMDEwMDBiYWRmNjc1NTY0YZfzqcw=: 00:18:03.553 11:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.553 11:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:03.553 11:26:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.553 11:26:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.553 11:26:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.553 11:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.553 11:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.553 11:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:03.553 11:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:03.553 11:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:03.554 11:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key 
ckey qpairs 00:18:03.554 11:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:03.554 11:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:03.554 11:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:03.554 11:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.554 11:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.554 11:26:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.554 11:26:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.813 11:26:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.813 11:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.813 11:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.072 00:18:04.072 11:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.072 11:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.072 11:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.332 11:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.332 11:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.332 11:26:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:04.332 11:26:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.332 11:26:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:04.332 11:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.332 { 00:18:04.332 "cntlid": 81, 00:18:04.332 "qid": 0, 00:18:04.332 "state": "enabled", 00:18:04.332 "listen_address": { 00:18:04.332 "trtype": "RDMA", 00:18:04.332 "adrfam": "IPv4", 00:18:04.332 "traddr": "192.168.100.8", 00:18:04.332 "trsvcid": "4420" 00:18:04.332 }, 00:18:04.332 "peer_address": { 00:18:04.332 "trtype": "RDMA", 00:18:04.332 "adrfam": "IPv4", 00:18:04.332 "traddr": "192.168.100.8", 00:18:04.332 "trsvcid": "59841" 00:18:04.332 }, 00:18:04.332 "auth": { 00:18:04.332 "state": "completed", 00:18:04.332 "digest": "sha384", 00:18:04.332 "dhgroup": "ffdhe6144" 00:18:04.332 } 00:18:04.332 } 00:18:04.332 ]' 00:18:04.332 11:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.332 11:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:04.332 11:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:18:04.332 11:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:04.332 11:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.332 11:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.332 11:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.332 11:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.594 11:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:OGEzZGVmZWIyOWE2Yzg3MWExOWU5YWJlYmM5MzI5MThlNWVlMGRiNjk5MGQxMDVmZCSLCg==: --dhchap-ctrl-secret DHHC-1:03:NDJiNWEyZGUxN2IwZTIyM2MxZGI5OWYyOGM5ZWM1ZGUyOTJlYzhhNTc4NWQ0ZGQ4ZTU5M2E5ZDU1YjU5Yzg3MU4mqFQ=: 00:18:05.538 11:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.538 11:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:05.538 11:26:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:05.538 11:26:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.538 11:26:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:05.538 11:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.538 11:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:05.538 11:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:05.538 11:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:05.538 11:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.538 11:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:05.538 11:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:05.538 11:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:05.538 11:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.538 11:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.538 11:26:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:05.538 11:26:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.538 11:26:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:05.538 11:26:34 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.538 11:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.111 00:18:06.111 11:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.111 11:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.111 11:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.111 11:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.111 11:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.111 11:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:06.111 11:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.111 11:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:06.111 11:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.111 { 00:18:06.111 "cntlid": 83, 00:18:06.111 "qid": 0, 00:18:06.111 "state": "enabled", 00:18:06.111 "listen_address": { 00:18:06.111 "trtype": "RDMA", 00:18:06.111 "adrfam": "IPv4", 00:18:06.111 "traddr": "192.168.100.8", 00:18:06.111 "trsvcid": "4420" 00:18:06.111 }, 00:18:06.111 "peer_address": { 00:18:06.111 "trtype": "RDMA", 00:18:06.111 "adrfam": "IPv4", 00:18:06.111 "traddr": "192.168.100.8", 00:18:06.111 "trsvcid": "59376" 00:18:06.111 }, 00:18:06.111 "auth": { 00:18:06.111 "state": "completed", 00:18:06.111 "digest": "sha384", 00:18:06.111 "dhgroup": "ffdhe6144" 00:18:06.111 } 00:18:06.111 } 00:18:06.111 ]' 00:18:06.111 11:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.111 11:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:06.111 11:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.437 11:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:06.437 11:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.437 11:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.437 11:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.437 11:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.437 11:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret 
DHHC-1:01:NDdmNDFmZDNhMTVkZmRkMDQ2Zjg3NWIxZjJjYjM5NTm8RK2q: --dhchap-ctrl-secret DHHC-1:02:YzVmNjllZTY0MmQ4ZjFjNzVlZGFlZDBiMzE2YTU1MDY4ZmM3NmY0NTVmZTgxZDNieRjndw==: 00:18:07.378 11:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.378 11:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:07.378 11:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:07.378 11:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.378 11:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:07.378 11:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.378 11:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:07.378 11:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:07.639 11:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:07.639 11:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.639 11:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:07.639 11:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:07.639 11:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:07.639 11:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.639 11:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.639 11:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:07.639 11:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.639 11:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:07.639 11:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.640 11:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.901 00:18:07.901 11:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.901 11:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.901 11:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.163 11:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.163 11:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.163 11:26:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:08.163 11:26:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.163 11:26:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:08.163 11:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.163 { 00:18:08.163 "cntlid": 85, 00:18:08.163 "qid": 0, 00:18:08.163 "state": "enabled", 00:18:08.163 "listen_address": { 00:18:08.163 "trtype": "RDMA", 00:18:08.163 "adrfam": "IPv4", 00:18:08.163 "traddr": "192.168.100.8", 00:18:08.163 "trsvcid": "4420" 00:18:08.163 }, 00:18:08.163 "peer_address": { 00:18:08.163 "trtype": "RDMA", 00:18:08.163 "adrfam": "IPv4", 00:18:08.163 "traddr": "192.168.100.8", 00:18:08.163 "trsvcid": "51425" 00:18:08.163 }, 00:18:08.163 "auth": { 00:18:08.163 "state": "completed", 00:18:08.163 "digest": "sha384", 00:18:08.163 "dhgroup": "ffdhe6144" 00:18:08.163 } 00:18:08.163 } 00:18:08.163 ]' 00:18:08.163 11:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.163 11:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:08.163 11:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.163 11:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:08.163 11:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.425 11:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.425 11:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.425 11:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.425 11:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:YjAyOWFiNDM1MmFlNjQ1NmRmOGVmOGJiNWJiOWU2NGM0MmI0N2E0ODYzOTNhNDA1TSuKxA==: --dhchap-ctrl-secret DHHC-1:01:ZGI1YWQ5N2Q3NTdiNzI0ZWU0MDIyMTZhNTFiYjgzMzKkUHJB: 00:18:09.367 11:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.367 11:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:09.367 11:26:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:09.367 11:26:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.367 11:26:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:09.367 11:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.367 11:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:09.367 11:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:09.627 11:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:09.627 11:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.627 11:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:09.627 11:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:09.627 11:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:09.627 11:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.627 11:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:09.627 11:26:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:09.627 11:26:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.627 11:26:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:09.627 11:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:09.627 11:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:09.889 00:18:09.889 11:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.889 11:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.889 11:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.151 11:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.151 11:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.151 11:26:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:10.151 11:26:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.151 11:26:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:10.151 11:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.151 { 00:18:10.151 "cntlid": 87, 00:18:10.151 "qid": 0, 00:18:10.151 "state": "enabled", 00:18:10.151 "listen_address": { 00:18:10.151 "trtype": "RDMA", 00:18:10.151 "adrfam": "IPv4", 00:18:10.151 "traddr": "192.168.100.8", 00:18:10.151 "trsvcid": "4420" 00:18:10.151 }, 00:18:10.151 "peer_address": { 00:18:10.151 "trtype": "RDMA", 00:18:10.151 "adrfam": "IPv4", 00:18:10.151 "traddr": "192.168.100.8", 00:18:10.151 "trsvcid": "52293" 
00:18:10.151 }, 00:18:10.151 "auth": { 00:18:10.151 "state": "completed", 00:18:10.151 "digest": "sha384", 00:18:10.151 "dhgroup": "ffdhe6144" 00:18:10.151 } 00:18:10.151 } 00:18:10.151 ]' 00:18:10.151 11:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.151 11:26:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:10.151 11:26:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.151 11:26:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:10.151 11:26:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.151 11:26:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.151 11:26:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.151 11:26:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.412 11:26:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:NjIyMjUzZTIxNGUzNDNhNDJkNTYzOWE5NmE0OGJlOTAwZTc4MDgzOTZmZjIxZTMzMDEwMDBiYWRmNjc1NTY0YZfzqcw=: 00:18:11.352 11:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.352 11:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:11.352 11:26:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.352 11:26:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.352 11:26:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.352 11:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.352 11:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.352 11:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:11.352 11:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:11.612 11:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:11.612 11:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.612 11:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:11.612 11:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:11.612 11:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:11.612 11:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.612 11:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.612 11:26:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.612 11:26:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.612 11:26:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.612 11:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.612 11:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.183 00:18:12.183 11:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.183 11:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.183 11:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.183 11:26:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.183 11:26:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.183 11:26:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:12.183 11:26:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.183 11:26:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:12.183 11:26:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.183 { 00:18:12.183 "cntlid": 89, 00:18:12.183 "qid": 0, 00:18:12.183 "state": "enabled", 00:18:12.183 "listen_address": { 00:18:12.183 "trtype": "RDMA", 00:18:12.183 "adrfam": "IPv4", 00:18:12.183 "traddr": "192.168.100.8", 00:18:12.183 "trsvcid": "4420" 00:18:12.183 }, 00:18:12.183 "peer_address": { 00:18:12.183 "trtype": "RDMA", 00:18:12.183 "adrfam": "IPv4", 00:18:12.183 "traddr": "192.168.100.8", 00:18:12.183 "trsvcid": "54646" 00:18:12.183 }, 00:18:12.183 "auth": { 00:18:12.183 "state": "completed", 00:18:12.183 "digest": "sha384", 00:18:12.183 "dhgroup": "ffdhe8192" 00:18:12.184 } 00:18:12.184 } 00:18:12.184 ]' 00:18:12.184 11:26:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.444 11:26:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.444 11:26:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.444 11:26:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:12.444 11:26:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.444 11:26:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.444 11:26:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.444 11:26:41 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.704 11:26:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:OGEzZGVmZWIyOWE2Yzg3MWExOWU5YWJlYmM5MzI5MThlNWVlMGRiNjk5MGQxMDVmZCSLCg==: --dhchap-ctrl-secret DHHC-1:03:NDJiNWEyZGUxN2IwZTIyM2MxZGI5OWYyOGM5ZWM1ZGUyOTJlYzhhNTc4NWQ0ZGQ4ZTU5M2E5ZDU1YjU5Yzg3MU4mqFQ=: 00:18:13.643 11:26:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.644 11:26:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:13.644 11:26:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:13.644 11:26:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.644 11:26:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:13.644 11:26:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.644 11:26:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:13.644 11:26:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:13.644 11:26:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:13.644 11:26:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.644 11:26:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:13.644 11:26:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:13.644 11:26:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:13.644 11:26:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.644 11:26:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.644 11:26:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:13.644 11:26:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.644 11:26:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:13.644 11:26:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.644 11:26:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.213 00:18:14.213 11:26:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.213 11:26:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.213 11:26:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.474 11:26:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.474 11:26:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.474 11:26:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:14.474 11:26:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.474 11:26:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:14.474 11:26:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.474 { 00:18:14.474 "cntlid": 91, 00:18:14.474 "qid": 0, 00:18:14.474 "state": "enabled", 00:18:14.474 "listen_address": { 00:18:14.474 "trtype": "RDMA", 00:18:14.474 "adrfam": "IPv4", 00:18:14.474 "traddr": "192.168.100.8", 00:18:14.474 "trsvcid": "4420" 00:18:14.474 }, 00:18:14.474 "peer_address": { 00:18:14.474 "trtype": "RDMA", 00:18:14.474 "adrfam": "IPv4", 00:18:14.474 "traddr": "192.168.100.8", 00:18:14.474 "trsvcid": "33641" 00:18:14.474 }, 00:18:14.474 "auth": { 00:18:14.474 "state": "completed", 00:18:14.474 "digest": "sha384", 00:18:14.474 "dhgroup": "ffdhe8192" 00:18:14.474 } 00:18:14.474 } 00:18:14.474 ]' 00:18:14.474 11:26:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.474 11:26:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.474 11:26:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.474 11:26:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:14.474 11:26:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.474 11:26:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.474 11:26:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.474 11:26:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.733 11:26:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:NDdmNDFmZDNhMTVkZmRkMDQ2Zjg3NWIxZjJjYjM5NTm8RK2q: --dhchap-ctrl-secret DHHC-1:02:YzVmNjllZTY0MmQ4ZjFjNzVlZGFlZDBiMzE2YTU1MDY4ZmM3NmY0NTVmZTgxZDNieRjndw==: 00:18:15.671 11:26:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.671 11:26:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:15.671 11:26:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:15.671 11:26:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.671 11:26:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:15.671 11:26:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.671 11:26:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:15.671 11:26:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:15.931 11:26:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:15.931 11:26:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.931 11:26:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:15.931 11:26:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:15.931 11:26:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:15.931 11:26:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.931 11:26:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.931 11:26:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:15.931 11:26:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.931 11:26:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:15.931 11:26:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.931 11:26:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.500 00:18:16.500 11:26:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.500 11:26:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.500 11:26:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.500 11:26:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.500 11:26:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.500 11:26:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.500 11:26:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.500 11:26:45 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.500 11:26:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.500 { 00:18:16.500 "cntlid": 93, 00:18:16.500 "qid": 0, 00:18:16.500 "state": "enabled", 00:18:16.500 "listen_address": { 00:18:16.500 "trtype": "RDMA", 00:18:16.500 "adrfam": "IPv4", 00:18:16.500 "traddr": "192.168.100.8", 00:18:16.500 "trsvcid": "4420" 00:18:16.500 }, 00:18:16.500 "peer_address": { 00:18:16.500 "trtype": "RDMA", 00:18:16.500 "adrfam": "IPv4", 00:18:16.500 "traddr": "192.168.100.8", 00:18:16.500 "trsvcid": "34871" 00:18:16.500 }, 00:18:16.500 "auth": { 00:18:16.500 "state": "completed", 00:18:16.500 "digest": "sha384", 00:18:16.500 "dhgroup": "ffdhe8192" 00:18:16.500 } 00:18:16.500 } 00:18:16.500 ]' 00:18:16.500 11:26:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.500 11:26:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.500 11:26:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.760 11:26:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:16.760 11:26:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.760 11:26:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.760 11:26:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.760 11:26:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.760 11:26:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:YjAyOWFiNDM1MmFlNjQ1NmRmOGVmOGJiNWJiOWU2NGM0MmI0N2E0ODYzOTNhNDA1TSuKxA==: --dhchap-ctrl-secret DHHC-1:01:ZGI1YWQ5N2Q3NTdiNzI0ZWU0MDIyMTZhNTFiYjgzMzKkUHJB: 00:18:17.699 11:26:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.699 11:26:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:17.699 11:26:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.699 11:26:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.699 11:26:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.699 11:26:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.699 11:26:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:17.699 11:26:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:17.959 11:26:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:17.959 11:26:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 
-- # local digest dhgroup key ckey qpairs 00:18:17.959 11:26:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:17.959 11:26:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:17.959 11:26:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:17.959 11:26:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.959 11:26:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:17.959 11:26:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.959 11:26:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.959 11:26:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.959 11:26:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.959 11:26:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:18.529 00:18:18.529 11:26:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.529 11:26:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.529 11:26:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.789 11:26:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.789 11:26:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.789 11:26:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.789 11:26:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.789 11:26:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.789 11:26:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.789 { 00:18:18.789 "cntlid": 95, 00:18:18.789 "qid": 0, 00:18:18.789 "state": "enabled", 00:18:18.789 "listen_address": { 00:18:18.789 "trtype": "RDMA", 00:18:18.789 "adrfam": "IPv4", 00:18:18.789 "traddr": "192.168.100.8", 00:18:18.789 "trsvcid": "4420" 00:18:18.789 }, 00:18:18.789 "peer_address": { 00:18:18.789 "trtype": "RDMA", 00:18:18.789 "adrfam": "IPv4", 00:18:18.789 "traddr": "192.168.100.8", 00:18:18.789 "trsvcid": "45191" 00:18:18.789 }, 00:18:18.789 "auth": { 00:18:18.789 "state": "completed", 00:18:18.789 "digest": "sha384", 00:18:18.789 "dhgroup": "ffdhe8192" 00:18:18.789 } 00:18:18.789 } 00:18:18.789 ]' 00:18:18.789 11:26:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.789 11:26:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.789 11:26:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.789 
11:26:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:18.789 11:26:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.789 11:26:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.789 11:26:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.789 11:26:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.049 11:26:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:NjIyMjUzZTIxNGUzNDNhNDJkNTYzOWE5NmE0OGJlOTAwZTc4MDgzOTZmZjIxZTMzMDEwMDBiYWRmNjc1NTY0YZfzqcw=: 00:18:19.987 11:26:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.987 11:26:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:19.987 11:26:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.987 11:26:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.987 11:26:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.987 11:26:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:19.987 11:26:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:19.987 11:26:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.987 11:26:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:19.987 11:26:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:20.248 11:26:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:20.248 11:26:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.248 11:26:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:20.248 11:26:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:20.248 11:26:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:20.248 11:26:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.248 11:26:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.248 11:26:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:20.248 11:26:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.248 11:26:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:18:20.248 11:26:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.248 11:26:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.248 00:18:20.508 11:26:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.508 11:26:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.508 11:26:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.508 11:26:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.508 11:26:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.508 11:26:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:20.508 11:26:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.508 11:26:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:20.508 11:26:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.508 { 00:18:20.508 "cntlid": 97, 00:18:20.508 "qid": 0, 00:18:20.508 "state": "enabled", 00:18:20.508 "listen_address": { 00:18:20.508 "trtype": "RDMA", 00:18:20.508 "adrfam": "IPv4", 00:18:20.508 "traddr": "192.168.100.8", 00:18:20.508 "trsvcid": "4420" 00:18:20.508 }, 00:18:20.508 "peer_address": { 00:18:20.508 "trtype": "RDMA", 00:18:20.508 "adrfam": "IPv4", 00:18:20.508 "traddr": "192.168.100.8", 00:18:20.508 "trsvcid": "37822" 00:18:20.508 }, 00:18:20.508 "auth": { 00:18:20.508 "state": "completed", 00:18:20.508 "digest": "sha512", 00:18:20.508 "dhgroup": "null" 00:18:20.508 } 00:18:20.508 } 00:18:20.508 ]' 00:18:20.508 11:26:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.508 11:26:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.508 11:26:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.508 11:26:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:20.508 11:26:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.767 11:26:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.767 11:26:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.767 11:26:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.767 11:26:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret 
DHHC-1:00:OGEzZGVmZWIyOWE2Yzg3MWExOWU5YWJlYmM5MzI5MThlNWVlMGRiNjk5MGQxMDVmZCSLCg==: --dhchap-ctrl-secret DHHC-1:03:NDJiNWEyZGUxN2IwZTIyM2MxZGI5OWYyOGM5ZWM1ZGUyOTJlYzhhNTc4NWQ0ZGQ4ZTU5M2E5ZDU1YjU5Yzg3MU4mqFQ=: 00:18:21.707 11:26:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.707 11:26:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:21.707 11:26:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.707 11:26:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.967 11:26:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.967 11:26:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.967 11:26:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:21.967 11:26:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:21.967 11:26:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:21.967 11:26:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.967 11:26:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:21.967 11:26:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:21.967 11:26:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:21.967 11:26:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.967 11:26:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.967 11:26:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.967 11:26:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.967 11:26:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.967 11:26:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.967 11:26:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.245 00:18:22.245 11:26:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.245 11:26:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.245 11:26:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.549 11:26:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.549 11:26:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.549 11:26:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.549 11:26:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.549 11:26:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.549 11:26:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.549 { 00:18:22.549 "cntlid": 99, 00:18:22.549 "qid": 0, 00:18:22.549 "state": "enabled", 00:18:22.549 "listen_address": { 00:18:22.549 "trtype": "RDMA", 00:18:22.549 "adrfam": "IPv4", 00:18:22.549 "traddr": "192.168.100.8", 00:18:22.549 "trsvcid": "4420" 00:18:22.549 }, 00:18:22.550 "peer_address": { 00:18:22.550 "trtype": "RDMA", 00:18:22.550 "adrfam": "IPv4", 00:18:22.550 "traddr": "192.168.100.8", 00:18:22.550 "trsvcid": "59603" 00:18:22.550 }, 00:18:22.550 "auth": { 00:18:22.550 "state": "completed", 00:18:22.550 "digest": "sha512", 00:18:22.550 "dhgroup": "null" 00:18:22.550 } 00:18:22.550 } 00:18:22.550 ]' 00:18:22.550 11:26:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.550 11:26:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.550 11:26:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.550 11:26:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:22.550 11:26:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.550 11:26:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.550 11:26:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.550 11:26:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.550 11:26:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:NDdmNDFmZDNhMTVkZmRkMDQ2Zjg3NWIxZjJjYjM5NTm8RK2q: --dhchap-ctrl-secret DHHC-1:02:YzVmNjllZTY0MmQ4ZjFjNzVlZGFlZDBiMzE2YTU1MDY4ZmM3NmY0NTVmZTgxZDNieRjndw==: 00:18:23.490 11:26:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.750 11:26:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:23.750 11:26:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:23.750 11:26:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.750 11:26:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:23.750 11:26:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.750 11:26:52 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:23.750 11:26:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:23.750 11:26:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:23.750 11:26:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.750 11:26:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:23.750 11:26:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:23.750 11:26:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:23.750 11:26:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.750 11:26:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.750 11:26:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:23.750 11:26:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.750 11:26:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:23.750 11:26:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.751 11:26:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.010 00:18:24.010 11:26:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.010 11:26:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.010 11:26:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.271 11:26:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.271 11:26:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.271 11:26:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:24.271 11:26:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.271 11:26:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:24.271 11:26:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.271 { 00:18:24.271 "cntlid": 101, 00:18:24.271 "qid": 0, 00:18:24.271 "state": "enabled", 00:18:24.271 "listen_address": { 00:18:24.271 "trtype": "RDMA", 00:18:24.271 "adrfam": "IPv4", 00:18:24.271 "traddr": "192.168.100.8", 00:18:24.271 "trsvcid": "4420" 00:18:24.271 }, 00:18:24.271 "peer_address": { 00:18:24.271 "trtype": "RDMA", 
00:18:24.271 "adrfam": "IPv4", 00:18:24.271 "traddr": "192.168.100.8", 00:18:24.271 "trsvcid": "34832" 00:18:24.271 }, 00:18:24.271 "auth": { 00:18:24.271 "state": "completed", 00:18:24.271 "digest": "sha512", 00:18:24.271 "dhgroup": "null" 00:18:24.271 } 00:18:24.271 } 00:18:24.271 ]' 00:18:24.271 11:26:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.271 11:26:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.271 11:26:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.271 11:26:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:24.271 11:26:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.271 11:26:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.271 11:26:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.271 11:26:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.530 11:26:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:YjAyOWFiNDM1MmFlNjQ1NmRmOGVmOGJiNWJiOWU2NGM0MmI0N2E0ODYzOTNhNDA1TSuKxA==: --dhchap-ctrl-secret DHHC-1:01:ZGI1YWQ5N2Q3NTdiNzI0ZWU0MDIyMTZhNTFiYjgzMzKkUHJB: 00:18:25.467 11:26:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.467 11:26:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:25.467 11:26:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.467 11:26:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.467 11:26:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.467 11:26:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.467 11:26:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:25.467 11:26:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:25.727 11:26:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:25.727 11:26:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.727 11:26:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:25.727 11:26:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:25.727 11:26:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:25.727 11:26:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.727 11:26:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:25.727 11:26:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.727 11:26:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.727 11:26:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.727 11:26:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.727 11:26:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.988 00:18:25.988 11:26:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.988 11:26:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.988 11:26:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.248 11:26:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.248 11:26:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.248 11:26:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.248 11:26:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.248 11:26:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.248 11:26:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.248 { 00:18:26.248 "cntlid": 103, 00:18:26.248 "qid": 0, 00:18:26.248 "state": "enabled", 00:18:26.248 "listen_address": { 00:18:26.248 "trtype": "RDMA", 00:18:26.248 "adrfam": "IPv4", 00:18:26.248 "traddr": "192.168.100.8", 00:18:26.248 "trsvcid": "4420" 00:18:26.248 }, 00:18:26.248 "peer_address": { 00:18:26.248 "trtype": "RDMA", 00:18:26.248 "adrfam": "IPv4", 00:18:26.248 "traddr": "192.168.100.8", 00:18:26.248 "trsvcid": "53469" 00:18:26.248 }, 00:18:26.248 "auth": { 00:18:26.248 "state": "completed", 00:18:26.248 "digest": "sha512", 00:18:26.248 "dhgroup": "null" 00:18:26.248 } 00:18:26.248 } 00:18:26.248 ]' 00:18:26.248 11:26:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.249 11:26:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.249 11:26:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.249 11:26:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:26.249 11:26:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.249 11:26:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.249 11:26:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.249 11:26:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.509 11:26:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:NjIyMjUzZTIxNGUzNDNhNDJkNTYzOWE5NmE0OGJlOTAwZTc4MDgzOTZmZjIxZTMzMDEwMDBiYWRmNjc1NTY0YZfzqcw=: 00:18:27.450 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.450 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:27.450 11:26:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:27.450 11:26:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.450 11:26:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:27.450 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:27.450 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.450 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:27.450 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:27.710 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:18:27.710 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.710 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:27.710 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:27.710 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:27.710 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.710 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.711 11:26:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:27.711 11:26:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.711 11:26:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:27.711 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.711 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.711 00:18:27.711 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.711 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.711 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.970 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.970 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.970 11:26:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:27.970 11:26:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.970 11:26:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:27.970 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.970 { 00:18:27.970 "cntlid": 105, 00:18:27.970 "qid": 0, 00:18:27.970 "state": "enabled", 00:18:27.970 "listen_address": { 00:18:27.970 "trtype": "RDMA", 00:18:27.970 "adrfam": "IPv4", 00:18:27.970 "traddr": "192.168.100.8", 00:18:27.970 "trsvcid": "4420" 00:18:27.970 }, 00:18:27.970 "peer_address": { 00:18:27.970 "trtype": "RDMA", 00:18:27.970 "adrfam": "IPv4", 00:18:27.970 "traddr": "192.168.100.8", 00:18:27.970 "trsvcid": "39805" 00:18:27.970 }, 00:18:27.970 "auth": { 00:18:27.970 "state": "completed", 00:18:27.970 "digest": "sha512", 00:18:27.970 "dhgroup": "ffdhe2048" 00:18:27.970 } 00:18:27.970 } 00:18:27.970 ]' 00:18:27.970 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.970 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.970 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.231 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:28.231 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.231 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.231 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.231 11:26:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.231 11:26:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:OGEzZGVmZWIyOWE2Yzg3MWExOWU5YWJlYmM5MzI5MThlNWVlMGRiNjk5MGQxMDVmZCSLCg==: --dhchap-ctrl-secret DHHC-1:03:NDJiNWEyZGUxN2IwZTIyM2MxZGI5OWYyOGM5ZWM1ZGUyOTJlYzhhNTc4NWQ0ZGQ4ZTU5M2E5ZDU1YjU5Yzg3MU4mqFQ=: 00:18:29.174 11:26:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.437 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:29.437 11:26:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:29.437 11:26:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.437 11:26:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:29.437 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.437 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:29.437 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:29.437 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:18:29.437 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.437 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:29.437 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:29.437 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:29.437 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.437 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.437 11:26:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:29.437 11:26:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.437 11:26:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:29.437 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.437 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.698 00:18:29.698 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.698 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.698 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.960 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.960 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.960 11:26:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:29.960 11:26:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.960 11:26:58 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:29.960 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.960 { 00:18:29.960 "cntlid": 107, 00:18:29.960 "qid": 0, 00:18:29.960 "state": "enabled", 00:18:29.960 "listen_address": { 00:18:29.960 "trtype": "RDMA", 00:18:29.960 "adrfam": "IPv4", 00:18:29.960 "traddr": "192.168.100.8", 00:18:29.960 "trsvcid": "4420" 00:18:29.960 }, 00:18:29.960 "peer_address": { 00:18:29.960 "trtype": "RDMA", 00:18:29.960 "adrfam": "IPv4", 00:18:29.960 "traddr": "192.168.100.8", 00:18:29.960 "trsvcid": "48020" 00:18:29.960 }, 00:18:29.961 "auth": { 00:18:29.961 "state": "completed", 00:18:29.961 "digest": "sha512", 00:18:29.961 "dhgroup": "ffdhe2048" 00:18:29.961 } 00:18:29.961 } 00:18:29.961 ]' 00:18:29.961 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.961 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:29.961 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.961 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:29.961 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.961 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.961 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.961 11:26:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.222 11:26:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:NDdmNDFmZDNhMTVkZmRkMDQ2Zjg3NWIxZjJjYjM5NTm8RK2q: --dhchap-ctrl-secret DHHC-1:02:YzVmNjllZTY0MmQ4ZjFjNzVlZGFlZDBiMzE2YTU1MDY4ZmM3NmY0NTVmZTgxZDNieRjndw==: 00:18:31.164 11:26:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.164 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:31.164 11:27:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.164 11:27:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.164 11:27:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:31.164 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.164 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:31.164 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:31.425 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:18:31.425 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 
-- # local digest dhgroup key ckey qpairs 00:18:31.425 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:31.425 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:31.425 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:31.425 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.425 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.425 11:27:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.425 11:27:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.425 11:27:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:31.425 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.425 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.686 00:18:31.686 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.686 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.686 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.686 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.686 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.686 11:27:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.686 11:27:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.686 11:27:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:31.686 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.686 { 00:18:31.686 "cntlid": 109, 00:18:31.686 "qid": 0, 00:18:31.686 "state": "enabled", 00:18:31.686 "listen_address": { 00:18:31.686 "trtype": "RDMA", 00:18:31.686 "adrfam": "IPv4", 00:18:31.686 "traddr": "192.168.100.8", 00:18:31.686 "trsvcid": "4420" 00:18:31.686 }, 00:18:31.686 "peer_address": { 00:18:31.686 "trtype": "RDMA", 00:18:31.686 "adrfam": "IPv4", 00:18:31.686 "traddr": "192.168.100.8", 00:18:31.686 "trsvcid": "37084" 00:18:31.686 }, 00:18:31.686 "auth": { 00:18:31.686 "state": "completed", 00:18:31.686 "digest": "sha512", 00:18:31.686 "dhgroup": "ffdhe2048" 00:18:31.686 } 00:18:31.686 } 00:18:31.686 ]' 00:18:31.686 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.686 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.948 11:27:00 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.948 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:31.948 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.948 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.948 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.948 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.948 11:27:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:YjAyOWFiNDM1MmFlNjQ1NmRmOGVmOGJiNWJiOWU2NGM0MmI0N2E0ODYzOTNhNDA1TSuKxA==: --dhchap-ctrl-secret DHHC-1:01:ZGI1YWQ5N2Q3NTdiNzI0ZWU0MDIyMTZhNTFiYjgzMzKkUHJB: 00:18:32.895 11:27:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.156 11:27:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:33.156 11:27:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:33.156 11:27:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.156 11:27:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:33.156 11:27:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.156 11:27:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:33.156 11:27:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:33.156 11:27:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:18:33.156 11:27:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.156 11:27:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:33.156 11:27:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:33.156 11:27:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:33.156 11:27:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.156 11:27:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:33.156 11:27:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:33.156 11:27:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.156 11:27:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:33.156 11:27:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # 
hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.156 11:27:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.416 00:18:33.416 11:27:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.416 11:27:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.416 11:27:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.676 11:27:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.676 11:27:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.676 11:27:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:33.676 11:27:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.676 11:27:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:33.676 11:27:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.676 { 00:18:33.676 "cntlid": 111, 00:18:33.676 "qid": 0, 00:18:33.676 "state": "enabled", 00:18:33.676 "listen_address": { 00:18:33.676 "trtype": "RDMA", 00:18:33.676 "adrfam": "IPv4", 00:18:33.676 "traddr": "192.168.100.8", 00:18:33.676 "trsvcid": "4420" 00:18:33.676 }, 00:18:33.676 "peer_address": { 00:18:33.676 "trtype": "RDMA", 00:18:33.676 "adrfam": "IPv4", 00:18:33.676 "traddr": "192.168.100.8", 00:18:33.676 "trsvcid": "44995" 00:18:33.676 }, 00:18:33.676 "auth": { 00:18:33.676 "state": "completed", 00:18:33.676 "digest": "sha512", 00:18:33.676 "dhgroup": "ffdhe2048" 00:18:33.676 } 00:18:33.676 } 00:18:33.676 ]' 00:18:33.676 11:27:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.676 11:27:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.676 11:27:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.676 11:27:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:33.676 11:27:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.676 11:27:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.676 11:27:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.676 11:27:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.937 11:27:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:NjIyMjUzZTIxNGUzNDNhNDJkNTYzOWE5NmE0OGJlOTAwZTc4MDgzOTZmZjIxZTMzMDEwMDBiYWRmNjc1NTY0YZfzqcw=: 00:18:34.877 
11:27:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.877 11:27:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:34.877 11:27:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:34.877 11:27:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.877 11:27:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:34.877 11:27:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:34.877 11:27:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.877 11:27:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:34.877 11:27:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:35.136 11:27:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:18:35.136 11:27:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.136 11:27:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:35.136 11:27:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:35.136 11:27:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:35.136 11:27:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.136 11:27:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.136 11:27:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:35.136 11:27:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.136 11:27:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:35.136 11:27:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.136 11:27:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.396 00:18:35.396 11:27:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.396 11:27:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.396 11:27:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.396 11:27:04 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.396 11:27:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.396 11:27:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:35.396 11:27:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.656 11:27:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:35.656 11:27:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.656 { 00:18:35.656 "cntlid": 113, 00:18:35.656 "qid": 0, 00:18:35.656 "state": "enabled", 00:18:35.656 "listen_address": { 00:18:35.656 "trtype": "RDMA", 00:18:35.656 "adrfam": "IPv4", 00:18:35.656 "traddr": "192.168.100.8", 00:18:35.656 "trsvcid": "4420" 00:18:35.656 }, 00:18:35.656 "peer_address": { 00:18:35.656 "trtype": "RDMA", 00:18:35.656 "adrfam": "IPv4", 00:18:35.656 "traddr": "192.168.100.8", 00:18:35.656 "trsvcid": "46981" 00:18:35.656 }, 00:18:35.656 "auth": { 00:18:35.656 "state": "completed", 00:18:35.656 "digest": "sha512", 00:18:35.656 "dhgroup": "ffdhe3072" 00:18:35.656 } 00:18:35.656 } 00:18:35.656 ]' 00:18:35.656 11:27:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.656 11:27:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.656 11:27:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.656 11:27:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:35.656 11:27:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.656 11:27:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.656 11:27:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.656 11:27:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.916 11:27:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:OGEzZGVmZWIyOWE2Yzg3MWExOWU5YWJlYmM5MzI5MThlNWVlMGRiNjk5MGQxMDVmZCSLCg==: --dhchap-ctrl-secret DHHC-1:03:NDJiNWEyZGUxN2IwZTIyM2MxZGI5OWYyOGM5ZWM1ZGUyOTJlYzhhNTc4NWQ0ZGQ4ZTU5M2E5ZDU1YjU5Yzg3MU4mqFQ=: 00:18:36.859 11:27:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.859 11:27:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:36.859 11:27:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.859 11:27:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.859 11:27:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.859 11:27:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.859 11:27:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:36.859 11:27:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:36.859 11:27:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:18:36.859 11:27:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.859 11:27:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:36.859 11:27:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:36.859 11:27:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:36.859 11:27:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.859 11:27:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.859 11:27:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.859 11:27:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.120 11:27:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.120 11:27:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.120 11:27:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.120 00:18:37.120 11:27:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.120 11:27:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.120 11:27:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.404 11:27:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.404 11:27:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.404 11:27:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.404 11:27:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.404 11:27:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.404 11:27:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.404 { 00:18:37.404 "cntlid": 115, 00:18:37.404 "qid": 0, 00:18:37.404 "state": "enabled", 00:18:37.404 "listen_address": { 00:18:37.404 "trtype": "RDMA", 00:18:37.404 "adrfam": "IPv4", 00:18:37.404 "traddr": "192.168.100.8", 00:18:37.404 "trsvcid": "4420" 00:18:37.404 }, 00:18:37.404 "peer_address": { 00:18:37.404 "trtype": "RDMA", 00:18:37.404 "adrfam": "IPv4", 00:18:37.404 
"traddr": "192.168.100.8", 00:18:37.404 "trsvcid": "44878" 00:18:37.404 }, 00:18:37.404 "auth": { 00:18:37.404 "state": "completed", 00:18:37.404 "digest": "sha512", 00:18:37.404 "dhgroup": "ffdhe3072" 00:18:37.404 } 00:18:37.404 } 00:18:37.404 ]' 00:18:37.404 11:27:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.404 11:27:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:37.404 11:27:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.404 11:27:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:37.404 11:27:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.664 11:27:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.664 11:27:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.664 11:27:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.665 11:27:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:NDdmNDFmZDNhMTVkZmRkMDQ2Zjg3NWIxZjJjYjM5NTm8RK2q: --dhchap-ctrl-secret DHHC-1:02:YzVmNjllZTY0MmQ4ZjFjNzVlZGFlZDBiMzE2YTU1MDY4ZmM3NmY0NTVmZTgxZDNieRjndw==: 00:18:38.604 11:27:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.604 11:27:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:38.604 11:27:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.604 11:27:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.604 11:27:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.604 11:27:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.604 11:27:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:38.604 11:27:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:38.865 11:27:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:18:38.865 11:27:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.865 11:27:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:38.865 11:27:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:38.865 11:27:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:38.865 11:27:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.865 11:27:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.865 11:27:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.865 11:27:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.865 11:27:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.865 11:27:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.865 11:27:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.125 00:18:39.125 11:27:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.125 11:27:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.125 11:27:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.386 11:27:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.386 11:27:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.386 11:27:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.386 11:27:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.386 11:27:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.386 11:27:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.386 { 00:18:39.386 "cntlid": 117, 00:18:39.386 "qid": 0, 00:18:39.386 "state": "enabled", 00:18:39.386 "listen_address": { 00:18:39.386 "trtype": "RDMA", 00:18:39.386 "adrfam": "IPv4", 00:18:39.386 "traddr": "192.168.100.8", 00:18:39.386 "trsvcid": "4420" 00:18:39.386 }, 00:18:39.386 "peer_address": { 00:18:39.386 "trtype": "RDMA", 00:18:39.386 "adrfam": "IPv4", 00:18:39.386 "traddr": "192.168.100.8", 00:18:39.386 "trsvcid": "60575" 00:18:39.386 }, 00:18:39.386 "auth": { 00:18:39.386 "state": "completed", 00:18:39.386 "digest": "sha512", 00:18:39.386 "dhgroup": "ffdhe3072" 00:18:39.386 } 00:18:39.386 } 00:18:39.386 ]' 00:18:39.386 11:27:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.386 11:27:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:39.386 11:27:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.386 11:27:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:39.386 11:27:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.386 11:27:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.386 11:27:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.386 11:27:08 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.645 11:27:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:YjAyOWFiNDM1MmFlNjQ1NmRmOGVmOGJiNWJiOWU2NGM0MmI0N2E0ODYzOTNhNDA1TSuKxA==: --dhchap-ctrl-secret DHHC-1:01:ZGI1YWQ5N2Q3NTdiNzI0ZWU0MDIyMTZhNTFiYjgzMzKkUHJB: 00:18:40.585 11:27:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.585 11:27:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:40.585 11:27:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.585 11:27:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.585 11:27:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.585 11:27:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.585 11:27:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:40.585 11:27:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:40.846 11:27:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:40.846 11:27:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.846 11:27:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:40.846 11:27:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:40.846 11:27:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:40.846 11:27:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.846 11:27:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:40.846 11:27:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.846 11:27:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.846 11:27:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.846 11:27:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:40.846 11:27:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key3 00:18:41.107 00:18:41.107 11:27:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.107 11:27:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.107 11:27:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.107 11:27:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.107 11:27:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.107 11:27:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:41.107 11:27:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.107 11:27:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:41.107 11:27:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.107 { 00:18:41.107 "cntlid": 119, 00:18:41.107 "qid": 0, 00:18:41.107 "state": "enabled", 00:18:41.107 "listen_address": { 00:18:41.107 "trtype": "RDMA", 00:18:41.107 "adrfam": "IPv4", 00:18:41.107 "traddr": "192.168.100.8", 00:18:41.107 "trsvcid": "4420" 00:18:41.107 }, 00:18:41.107 "peer_address": { 00:18:41.107 "trtype": "RDMA", 00:18:41.107 "adrfam": "IPv4", 00:18:41.107 "traddr": "192.168.100.8", 00:18:41.107 "trsvcid": "48334" 00:18:41.107 }, 00:18:41.107 "auth": { 00:18:41.107 "state": "completed", 00:18:41.107 "digest": "sha512", 00:18:41.107 "dhgroup": "ffdhe3072" 00:18:41.107 } 00:18:41.107 } 00:18:41.107 ]' 00:18:41.107 11:27:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.107 11:27:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:41.107 11:27:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.367 11:27:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:41.367 11:27:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.367 11:27:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.367 11:27:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.367 11:27:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.367 11:27:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:NjIyMjUzZTIxNGUzNDNhNDJkNTYzOWE5NmE0OGJlOTAwZTc4MDgzOTZmZjIxZTMzMDEwMDBiYWRmNjc1NTY0YZfzqcw=: 00:18:42.309 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.569 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:42.569 11:27:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.569 11:27:11 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:42.569 11:27:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.569 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.569 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.569 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:42.569 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:42.569 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:42.569 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.569 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:42.569 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:42.569 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:42.569 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.569 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.569 11:27:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.569 11:27:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.569 11:27:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.569 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.569 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.830 00:18:42.830 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.830 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.830 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.090 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.090 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.090 11:27:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:43.090 11:27:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.090 11:27:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:43.090 11:27:11 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.090 { 00:18:43.090 "cntlid": 121, 00:18:43.090 "qid": 0, 00:18:43.090 "state": "enabled", 00:18:43.090 "listen_address": { 00:18:43.090 "trtype": "RDMA", 00:18:43.090 "adrfam": "IPv4", 00:18:43.090 "traddr": "192.168.100.8", 00:18:43.090 "trsvcid": "4420" 00:18:43.090 }, 00:18:43.090 "peer_address": { 00:18:43.090 "trtype": "RDMA", 00:18:43.090 "adrfam": "IPv4", 00:18:43.090 "traddr": "192.168.100.8", 00:18:43.090 "trsvcid": "35754" 00:18:43.090 }, 00:18:43.090 "auth": { 00:18:43.090 "state": "completed", 00:18:43.090 "digest": "sha512", 00:18:43.090 "dhgroup": "ffdhe4096" 00:18:43.090 } 00:18:43.090 } 00:18:43.090 ]' 00:18:43.090 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.090 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.090 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.090 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:43.090 11:27:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.090 11:27:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.090 11:27:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.090 11:27:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.350 11:27:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:OGEzZGVmZWIyOWE2Yzg3MWExOWU5YWJlYmM5MzI5MThlNWVlMGRiNjk5MGQxMDVmZCSLCg==: --dhchap-ctrl-secret DHHC-1:03:NDJiNWEyZGUxN2IwZTIyM2MxZGI5OWYyOGM5ZWM1ZGUyOTJlYzhhNTc4NWQ0ZGQ4ZTU5M2E5ZDU1YjU5Yzg3MU4mqFQ=: 00:18:44.290 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.290 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:44.290 11:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.290 11:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.290 11:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.290 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.290 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:44.290 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:44.549 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:44.549 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.549 
11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:44.549 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:44.549 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:44.549 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.549 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.549 11:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.549 11:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.549 11:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.549 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.549 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.809 00:18:44.809 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.809 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.809 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.809 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.809 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.809 11:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.809 11:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.809 11:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.809 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.809 { 00:18:44.809 "cntlid": 123, 00:18:44.809 "qid": 0, 00:18:44.809 "state": "enabled", 00:18:44.809 "listen_address": { 00:18:44.809 "trtype": "RDMA", 00:18:44.809 "adrfam": "IPv4", 00:18:44.809 "traddr": "192.168.100.8", 00:18:44.809 "trsvcid": "4420" 00:18:44.809 }, 00:18:44.809 "peer_address": { 00:18:44.809 "trtype": "RDMA", 00:18:44.809 "adrfam": "IPv4", 00:18:44.809 "traddr": "192.168.100.8", 00:18:44.809 "trsvcid": "48334" 00:18:44.809 }, 00:18:44.809 "auth": { 00:18:44.809 "state": "completed", 00:18:44.809 "digest": "sha512", 00:18:44.809 "dhgroup": "ffdhe4096" 00:18:44.809 } 00:18:44.809 } 00:18:44.809 ]' 00:18:44.809 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.809 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:44.809 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:18:45.070 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:45.070 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.070 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.070 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.070 11:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.070 11:27:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:NDdmNDFmZDNhMTVkZmRkMDQ2Zjg3NWIxZjJjYjM5NTm8RK2q: --dhchap-ctrl-secret DHHC-1:02:YzVmNjllZTY0MmQ4ZjFjNzVlZGFlZDBiMzE2YTU1MDY4ZmM3NmY0NTVmZTgxZDNieRjndw==: 00:18:46.010 11:27:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.270 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:46.270 11:27:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:46.270 11:27:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.270 11:27:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:46.270 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.270 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:46.270 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:46.270 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:46.270 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.270 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:46.270 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:46.270 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:46.270 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.270 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.270 11:27:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:46.270 11:27:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.270 11:27:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:46.270 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.270 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.530 00:18:46.530 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.530 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.530 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.790 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.790 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.790 11:27:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:46.790 11:27:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.790 11:27:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:46.790 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.790 { 00:18:46.790 "cntlid": 125, 00:18:46.790 "qid": 0, 00:18:46.790 "state": "enabled", 00:18:46.790 "listen_address": { 00:18:46.790 "trtype": "RDMA", 00:18:46.790 "adrfam": "IPv4", 00:18:46.790 "traddr": "192.168.100.8", 00:18:46.790 "trsvcid": "4420" 00:18:46.790 }, 00:18:46.790 "peer_address": { 00:18:46.790 "trtype": "RDMA", 00:18:46.790 "adrfam": "IPv4", 00:18:46.790 "traddr": "192.168.100.8", 00:18:46.791 "trsvcid": "54657" 00:18:46.791 }, 00:18:46.791 "auth": { 00:18:46.791 "state": "completed", 00:18:46.791 "digest": "sha512", 00:18:46.791 "dhgroup": "ffdhe4096" 00:18:46.791 } 00:18:46.791 } 00:18:46.791 ]' 00:18:46.791 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.791 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:46.791 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.791 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:46.791 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.791 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.791 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.791 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.050 11:27:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret 
DHHC-1:02:YjAyOWFiNDM1MmFlNjQ1NmRmOGVmOGJiNWJiOWU2NGM0MmI0N2E0ODYzOTNhNDA1TSuKxA==: --dhchap-ctrl-secret DHHC-1:01:ZGI1YWQ5N2Q3NTdiNzI0ZWU0MDIyMTZhNTFiYjgzMzKkUHJB: 00:18:47.991 11:27:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.991 11:27:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:47.991 11:27:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.991 11:27:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.991 11:27:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:47.991 11:27:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.991 11:27:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:47.991 11:27:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:48.250 11:27:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:48.250 11:27:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.250 11:27:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:48.250 11:27:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:48.250 11:27:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:48.250 11:27:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.250 11:27:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:48.250 11:27:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:48.250 11:27:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.250 11:27:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:48.250 11:27:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.250 11:27:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.511 00:18:48.511 11:27:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.511 11:27:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.511 11:27:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.771 11:27:17 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.771 11:27:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.771 11:27:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:48.771 11:27:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.771 11:27:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:48.771 11:27:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.771 { 00:18:48.771 "cntlid": 127, 00:18:48.771 "qid": 0, 00:18:48.771 "state": "enabled", 00:18:48.771 "listen_address": { 00:18:48.771 "trtype": "RDMA", 00:18:48.771 "adrfam": "IPv4", 00:18:48.771 "traddr": "192.168.100.8", 00:18:48.771 "trsvcid": "4420" 00:18:48.771 }, 00:18:48.771 "peer_address": { 00:18:48.771 "trtype": "RDMA", 00:18:48.771 "adrfam": "IPv4", 00:18:48.771 "traddr": "192.168.100.8", 00:18:48.771 "trsvcid": "51173" 00:18:48.771 }, 00:18:48.771 "auth": { 00:18:48.771 "state": "completed", 00:18:48.771 "digest": "sha512", 00:18:48.771 "dhgroup": "ffdhe4096" 00:18:48.771 } 00:18:48.771 } 00:18:48.771 ]' 00:18:48.771 11:27:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.771 11:27:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:48.771 11:27:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.771 11:27:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:48.771 11:27:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.771 11:27:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.771 11:27:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.771 11:27:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.031 11:27:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:NjIyMjUzZTIxNGUzNDNhNDJkNTYzOWE5NmE0OGJlOTAwZTc4MDgzOTZmZjIxZTMzMDEwMDBiYWRmNjc1NTY0YZfzqcw=: 00:18:49.601 11:27:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.861 11:27:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:49.861 11:27:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:49.861 11:27:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.861 11:27:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:49.861 11:27:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.861 11:27:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.861 11:27:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:49.861 11:27:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:50.122 11:27:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:18:50.122 11:27:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.122 11:27:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:50.122 11:27:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:50.122 11:27:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:50.122 11:27:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.122 11:27:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.122 11:27:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:50.122 11:27:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.122 11:27:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:50.122 11:27:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.122 11:27:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.382 00:18:50.382 11:27:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.382 11:27:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.382 11:27:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.642 11:27:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.642 11:27:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.642 11:27:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:50.642 11:27:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.642 11:27:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:50.642 11:27:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.642 { 00:18:50.642 "cntlid": 129, 00:18:50.642 "qid": 0, 00:18:50.642 "state": "enabled", 00:18:50.642 "listen_address": { 00:18:50.642 "trtype": "RDMA", 00:18:50.642 "adrfam": "IPv4", 00:18:50.642 "traddr": "192.168.100.8", 00:18:50.642 "trsvcid": "4420" 00:18:50.642 }, 00:18:50.642 "peer_address": { 00:18:50.642 "trtype": "RDMA", 00:18:50.642 "adrfam": "IPv4", 00:18:50.642 
"traddr": "192.168.100.8", 00:18:50.642 "trsvcid": "55487" 00:18:50.642 }, 00:18:50.642 "auth": { 00:18:50.642 "state": "completed", 00:18:50.642 "digest": "sha512", 00:18:50.642 "dhgroup": "ffdhe6144" 00:18:50.642 } 00:18:50.642 } 00:18:50.642 ]' 00:18:50.642 11:27:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.642 11:27:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:50.642 11:27:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.642 11:27:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:50.642 11:27:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.642 11:27:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.642 11:27:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.642 11:27:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.902 11:27:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:OGEzZGVmZWIyOWE2Yzg3MWExOWU5YWJlYmM5MzI5MThlNWVlMGRiNjk5MGQxMDVmZCSLCg==: --dhchap-ctrl-secret DHHC-1:03:NDJiNWEyZGUxN2IwZTIyM2MxZGI5OWYyOGM5ZWM1ZGUyOTJlYzhhNTc4NWQ0ZGQ4ZTU5M2E5ZDU1YjU5Yzg3MU4mqFQ=: 00:18:51.841 11:27:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.841 11:27:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:51.841 11:27:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.841 11:27:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.841 11:27:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.841 11:27:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.841 11:27:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:51.841 11:27:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:52.129 11:27:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:52.129 11:27:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.129 11:27:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:52.129 11:27:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:52.129 11:27:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:52.129 11:27:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.130 11:27:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.130 11:27:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:52.130 11:27:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.130 11:27:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:52.130 11:27:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.130 11:27:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.389 00:18:52.389 11:27:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.389 11:27:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.389 11:27:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.649 11:27:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.649 11:27:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.649 11:27:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:52.649 11:27:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.649 11:27:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:52.649 11:27:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.649 { 00:18:52.649 "cntlid": 131, 00:18:52.649 "qid": 0, 00:18:52.649 "state": "enabled", 00:18:52.649 "listen_address": { 00:18:52.649 "trtype": "RDMA", 00:18:52.649 "adrfam": "IPv4", 00:18:52.649 "traddr": "192.168.100.8", 00:18:52.649 "trsvcid": "4420" 00:18:52.649 }, 00:18:52.649 "peer_address": { 00:18:52.649 "trtype": "RDMA", 00:18:52.649 "adrfam": "IPv4", 00:18:52.649 "traddr": "192.168.100.8", 00:18:52.649 "trsvcid": "54157" 00:18:52.649 }, 00:18:52.649 "auth": { 00:18:52.649 "state": "completed", 00:18:52.649 "digest": "sha512", 00:18:52.649 "dhgroup": "ffdhe6144" 00:18:52.649 } 00:18:52.649 } 00:18:52.649 ]' 00:18:52.649 11:27:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.649 11:27:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.649 11:27:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.649 11:27:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:52.649 11:27:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.649 11:27:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.649 11:27:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:18:52.649 11:27:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.909 11:27:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:NDdmNDFmZDNhMTVkZmRkMDQ2Zjg3NWIxZjJjYjM5NTm8RK2q: --dhchap-ctrl-secret DHHC-1:02:YzVmNjllZTY0MmQ4ZjFjNzVlZGFlZDBiMzE2YTU1MDY4ZmM3NmY0NTVmZTgxZDNieRjndw==: 00:18:53.847 11:27:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.847 11:27:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:53.847 11:27:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:53.847 11:27:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.847 11:27:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:53.848 11:27:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.848 11:27:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:53.848 11:27:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:54.107 11:27:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:54.107 11:27:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.107 11:27:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:54.107 11:27:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:54.107 11:27:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:54.107 11:27:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.107 11:27:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.107 11:27:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:54.107 11:27:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.107 11:27:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.107 11:27:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.107 11:27:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.366 00:18:54.366 11:27:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.366 11:27:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.366 11:27:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.626 11:27:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.626 11:27:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.626 11:27:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:54.626 11:27:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.626 11:27:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.626 11:27:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.626 { 00:18:54.626 "cntlid": 133, 00:18:54.626 "qid": 0, 00:18:54.626 "state": "enabled", 00:18:54.626 "listen_address": { 00:18:54.626 "trtype": "RDMA", 00:18:54.626 "adrfam": "IPv4", 00:18:54.626 "traddr": "192.168.100.8", 00:18:54.626 "trsvcid": "4420" 00:18:54.626 }, 00:18:54.626 "peer_address": { 00:18:54.626 "trtype": "RDMA", 00:18:54.626 "adrfam": "IPv4", 00:18:54.626 "traddr": "192.168.100.8", 00:18:54.626 "trsvcid": "51204" 00:18:54.626 }, 00:18:54.626 "auth": { 00:18:54.626 "state": "completed", 00:18:54.626 "digest": "sha512", 00:18:54.626 "dhgroup": "ffdhe6144" 00:18:54.626 } 00:18:54.626 } 00:18:54.626 ]' 00:18:54.626 11:27:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.626 11:27:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:54.626 11:27:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.626 11:27:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:54.626 11:27:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.626 11:27:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.626 11:27:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.626 11:27:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.887 11:27:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:YjAyOWFiNDM1MmFlNjQ1NmRmOGVmOGJiNWJiOWU2NGM0MmI0N2E0ODYzOTNhNDA1TSuKxA==: --dhchap-ctrl-secret DHHC-1:01:ZGI1YWQ5N2Q3NTdiNzI0ZWU0MDIyMTZhNTFiYjgzMzKkUHJB: 00:18:55.826 11:27:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.826 11:27:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:55.826 11:27:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.826 11:27:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.826 11:27:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.826 11:27:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.826 11:27:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:55.826 11:27:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:56.086 11:27:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:56.086 11:27:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.086 11:27:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:56.086 11:27:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:56.086 11:27:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:56.086 11:27:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.086 11:27:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:56.086 11:27:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.086 11:27:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.086 11:27:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.086 11:27:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:56.086 11:27:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:56.346 00:18:56.346 11:27:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.346 11:27:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.346 11:27:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.607 11:27:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.607 11:27:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.607 11:27:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.607 11:27:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.607 11:27:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
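Every digest/dhgroup/key cycle in this trace follows the same connect_authenticate pattern; at this point the log is in the sha512/ffdhe6144/key3 iteration, between nvmf_subsystem_add_host and the controller attach. Reconstructed from the xtrace output, one iteration looks roughly like the sketch below. The hostrpc and rpc_cmd wrappers, the keys/ckeys arrays of generated DHHC-1 secrets, and the variable names are inferred from the trace, not quoted from target/auth.sh, so treat this as an approximation:

# Sketch of one connect_authenticate iteration, pieced together from the
# xtrace above. Helper and variable names are assumptions, not verbatim source.
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6
dhgroup=ffdhe6144   # rotates per outer loop: ffdhe4096, ffdhe6144, ffdhe8192

for keyid in "${!keys[@]}"; do   # keys/ckeys hold DHHC-1 secrets set up earlier
    # restrict the host-side bdev driver to the digest/dhgroup under test
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
    # register the host on the subsystem with its DH-HMAC-CHAP key
    # (plus the controller key when a ckey exists for this key id)
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    # attaching the controller from the host app drives the auth handshake
    hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    # the qpair must report the negotiated digest/dhgroup and state "completed"
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    hostrpc bdev_nvme_detach_controller nvme0
    # the same credentials are then exercised through the kernel initiator
    nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid "$hostid" --dhchap-secret "DHHC-1:..."   # secrets elided; see log
    nvme disconnect -n "$subnqn"
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
done

This is why the log repeats near-identical blocks with only the --dhchap-dhgroups argument, the key id, and the ephemeral trsvcid of the peer qpair changing between cycles.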
00:18:56.607 11:27:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.607 { 00:18:56.607 "cntlid": 135, 00:18:56.607 "qid": 0, 00:18:56.607 "state": "enabled", 00:18:56.607 "listen_address": { 00:18:56.607 "trtype": "RDMA", 00:18:56.607 "adrfam": "IPv4", 00:18:56.607 "traddr": "192.168.100.8", 00:18:56.607 "trsvcid": "4420" 00:18:56.607 }, 00:18:56.607 "peer_address": { 00:18:56.607 "trtype": "RDMA", 00:18:56.607 "adrfam": "IPv4", 00:18:56.607 "traddr": "192.168.100.8", 00:18:56.607 "trsvcid": "41851" 00:18:56.607 }, 00:18:56.607 "auth": { 00:18:56.607 "state": "completed", 00:18:56.607 "digest": "sha512", 00:18:56.607 "dhgroup": "ffdhe6144" 00:18:56.607 } 00:18:56.607 } 00:18:56.607 ]' 00:18:56.607 11:27:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.607 11:27:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:56.607 11:27:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.607 11:27:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:56.607 11:27:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.607 11:27:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.607 11:27:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.607 11:27:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.868 11:27:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:NjIyMjUzZTIxNGUzNDNhNDJkNTYzOWE5NmE0OGJlOTAwZTc4MDgzOTZmZjIxZTMzMDEwMDBiYWRmNjc1NTY0YZfzqcw=: 00:18:57.807 11:27:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.807 11:27:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:57.807 11:27:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.807 11:27:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.807 11:27:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.807 11:27:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:57.807 11:27:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.808 11:27:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:57.808 11:27:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:58.067 11:27:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:58.067 11:27:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key 
ckey qpairs 00:18:58.067 11:27:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:58.067 11:27:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:58.067 11:27:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:58.067 11:27:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.067 11:27:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.067 11:27:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.067 11:27:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.067 11:27:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.067 11:27:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.067 11:27:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.636 00:18:58.636 11:27:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.636 11:27:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.636 11:27:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.636 11:27:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.636 11:27:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.637 11:27:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.637 11:27:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.637 11:27:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.637 11:27:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.637 { 00:18:58.637 "cntlid": 137, 00:18:58.637 "qid": 0, 00:18:58.637 "state": "enabled", 00:18:58.637 "listen_address": { 00:18:58.637 "trtype": "RDMA", 00:18:58.637 "adrfam": "IPv4", 00:18:58.637 "traddr": "192.168.100.8", 00:18:58.637 "trsvcid": "4420" 00:18:58.637 }, 00:18:58.637 "peer_address": { 00:18:58.637 "trtype": "RDMA", 00:18:58.637 "adrfam": "IPv4", 00:18:58.637 "traddr": "192.168.100.8", 00:18:58.637 "trsvcid": "41472" 00:18:58.637 }, 00:18:58.637 "auth": { 00:18:58.637 "state": "completed", 00:18:58.637 "digest": "sha512", 00:18:58.637 "dhgroup": "ffdhe8192" 00:18:58.637 } 00:18:58.637 } 00:18:58.637 ]' 00:18:58.637 11:27:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.896 11:27:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.896 11:27:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:18:58.896 11:27:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:58.896 11:27:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.896 11:27:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.896 11:27:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.896 11:27:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.156 11:27:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:OGEzZGVmZWIyOWE2Yzg3MWExOWU5YWJlYmM5MzI5MThlNWVlMGRiNjk5MGQxMDVmZCSLCg==: --dhchap-ctrl-secret DHHC-1:03:NDJiNWEyZGUxN2IwZTIyM2MxZGI5OWYyOGM5ZWM1ZGUyOTJlYzhhNTc4NWQ0ZGQ4ZTU5M2E5ZDU1YjU5Yzg3MU4mqFQ=: 00:19:00.097 11:27:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.097 11:27:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:00.097 11:27:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.097 11:27:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.097 11:27:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.097 11:27:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.097 11:27:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:00.097 11:27:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:00.097 11:27:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:00.097 11:27:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.097 11:27:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:00.097 11:27:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:00.097 11:27:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:00.097 11:27:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.097 11:27:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.097 11:27:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.097 11:27:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.097 11:27:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.097 11:27:29 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.097 11:27:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.668 00:19:00.668 11:27:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.668 11:27:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.668 11:27:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.928 11:27:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.928 11:27:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.929 11:27:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.929 11:27:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.929 11:27:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.929 11:27:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.929 { 00:19:00.929 "cntlid": 139, 00:19:00.929 "qid": 0, 00:19:00.929 "state": "enabled", 00:19:00.929 "listen_address": { 00:19:00.929 "trtype": "RDMA", 00:19:00.929 "adrfam": "IPv4", 00:19:00.929 "traddr": "192.168.100.8", 00:19:00.929 "trsvcid": "4420" 00:19:00.929 }, 00:19:00.929 "peer_address": { 00:19:00.929 "trtype": "RDMA", 00:19:00.929 "adrfam": "IPv4", 00:19:00.929 "traddr": "192.168.100.8", 00:19:00.929 "trsvcid": "49453" 00:19:00.929 }, 00:19:00.929 "auth": { 00:19:00.929 "state": "completed", 00:19:00.929 "digest": "sha512", 00:19:00.929 "dhgroup": "ffdhe8192" 00:19:00.929 } 00:19:00.929 } 00:19:00.929 ]' 00:19:00.929 11:27:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.929 11:27:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.929 11:27:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.929 11:27:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:00.929 11:27:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.929 11:27:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.929 11:27:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.929 11:27:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.189 11:27:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret 
DHHC-1:01:NDdmNDFmZDNhMTVkZmRkMDQ2Zjg3NWIxZjJjYjM5NTm8RK2q: --dhchap-ctrl-secret DHHC-1:02:YzVmNjllZTY0MmQ4ZjFjNzVlZGFlZDBiMzE2YTU1MDY4ZmM3NmY0NTVmZTgxZDNieRjndw==: 00:19:02.128 11:27:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.128 11:27:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:02.128 11:27:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.128 11:27:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.128 11:27:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.128 11:27:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.128 11:27:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:02.129 11:27:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:02.388 11:27:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:02.388 11:27:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.388 11:27:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:02.388 11:27:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:02.388 11:27:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:02.388 11:27:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.388 11:27:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.389 11:27:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.389 11:27:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.389 11:27:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.389 11:27:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.389 11:27:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.959 00:19:02.959 11:27:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.959 11:27:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.959 11:27:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.219 11:27:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.219 11:27:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.219 11:27:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.219 11:27:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.219 11:27:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.219 11:27:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.219 { 00:19:03.219 "cntlid": 141, 00:19:03.219 "qid": 0, 00:19:03.219 "state": "enabled", 00:19:03.219 "listen_address": { 00:19:03.219 "trtype": "RDMA", 00:19:03.219 "adrfam": "IPv4", 00:19:03.219 "traddr": "192.168.100.8", 00:19:03.219 "trsvcid": "4420" 00:19:03.219 }, 00:19:03.219 "peer_address": { 00:19:03.219 "trtype": "RDMA", 00:19:03.219 "adrfam": "IPv4", 00:19:03.219 "traddr": "192.168.100.8", 00:19:03.219 "trsvcid": "43851" 00:19:03.219 }, 00:19:03.219 "auth": { 00:19:03.219 "state": "completed", 00:19:03.219 "digest": "sha512", 00:19:03.219 "dhgroup": "ffdhe8192" 00:19:03.219 } 00:19:03.219 } 00:19:03.219 ]' 00:19:03.219 11:27:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.219 11:27:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.219 11:27:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.219 11:27:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:03.219 11:27:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.219 11:27:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.219 11:27:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.219 11:27:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.479 11:27:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:YjAyOWFiNDM1MmFlNjQ1NmRmOGVmOGJiNWJiOWU2NGM0MmI0N2E0ODYzOTNhNDA1TSuKxA==: --dhchap-ctrl-secret DHHC-1:01:ZGI1YWQ5N2Q3NTdiNzI0ZWU0MDIyMTZhNTFiYjgzMzKkUHJB: 00:19:04.420 11:27:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.420 11:27:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:04.420 11:27:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.420 11:27:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.420 11:27:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.420 11:27:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.420 11:27:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:04.420 11:27:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:04.680 11:27:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:19:04.680 11:27:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.680 11:27:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:04.680 11:27:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:04.680 11:27:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:04.680 11:27:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.680 11:27:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:19:04.680 11:27:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.680 11:27:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.680 11:27:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.680 11:27:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:04.680 11:27:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:05.251 00:19:05.251 11:27:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.251 11:27:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.251 11:27:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.252 11:27:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.252 11:27:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.252 11:27:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.252 11:27:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.252 11:27:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.252 11:27:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.252 { 00:19:05.252 "cntlid": 143, 00:19:05.252 "qid": 0, 00:19:05.252 "state": "enabled", 00:19:05.252 "listen_address": { 00:19:05.252 "trtype": "RDMA", 00:19:05.252 "adrfam": "IPv4", 00:19:05.252 "traddr": "192.168.100.8", 00:19:05.252 "trsvcid": "4420" 00:19:05.252 }, 00:19:05.252 "peer_address": { 00:19:05.252 "trtype": "RDMA", 00:19:05.252 "adrfam": "IPv4", 00:19:05.252 "traddr": "192.168.100.8", 00:19:05.252 "trsvcid": "55237" 
00:19:05.252 }, 00:19:05.252 "auth": { 00:19:05.252 "state": "completed", 00:19:05.252 "digest": "sha512", 00:19:05.252 "dhgroup": "ffdhe8192" 00:19:05.252 } 00:19:05.252 } 00:19:05.252 ]' 00:19:05.252 11:27:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.252 11:27:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.252 11:27:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.252 11:27:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:05.252 11:27:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.252 11:27:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.252 11:27:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.252 11:27:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.512 11:27:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:NjIyMjUzZTIxNGUzNDNhNDJkNTYzOWE5NmE0OGJlOTAwZTc4MDgzOTZmZjIxZTMzMDEwMDBiYWRmNjc1NTY0YZfzqcw=: 00:19:06.452 11:27:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.452 11:27:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:06.452 11:27:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:06.452 11:27:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.452 11:27:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:06.452 11:27:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:06.452 11:27:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:06.452 11:27:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:06.452 11:27:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:06.452 11:27:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:06.452 11:27:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:06.712 11:27:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:06.712 11:27:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.712 11:27:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:06.712 11:27:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:06.712 
11:27:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:06.712 11:27:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.712 11:27:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.712 11:27:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:06.712 11:27:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.712 11:27:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:06.712 11:27:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.712 11:27:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.282 00:19:07.282 11:27:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.282 11:27:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.282 11:27:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.282 11:27:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.282 11:27:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.282 11:27:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.282 11:27:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.282 11:27:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.282 11:27:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.282 { 00:19:07.282 "cntlid": 145, 00:19:07.282 "qid": 0, 00:19:07.282 "state": "enabled", 00:19:07.282 "listen_address": { 00:19:07.282 "trtype": "RDMA", 00:19:07.282 "adrfam": "IPv4", 00:19:07.282 "traddr": "192.168.100.8", 00:19:07.282 "trsvcid": "4420" 00:19:07.282 }, 00:19:07.282 "peer_address": { 00:19:07.282 "trtype": "RDMA", 00:19:07.282 "adrfam": "IPv4", 00:19:07.282 "traddr": "192.168.100.8", 00:19:07.282 "trsvcid": "48984" 00:19:07.282 }, 00:19:07.282 "auth": { 00:19:07.282 "state": "completed", 00:19:07.282 "digest": "sha512", 00:19:07.282 "dhgroup": "ffdhe8192" 00:19:07.282 } 00:19:07.282 } 00:19:07.282 ]' 00:19:07.282 11:27:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.282 11:27:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.282 11:27:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.585 11:27:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:07.585 11:27:36 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.585 11:27:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.585 11:27:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.585 11:27:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.585 11:27:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:OGEzZGVmZWIyOWE2Yzg3MWExOWU5YWJlYmM5MzI5MThlNWVlMGRiNjk5MGQxMDVmZCSLCg==: --dhchap-ctrl-secret DHHC-1:03:NDJiNWEyZGUxN2IwZTIyM2MxZGI5OWYyOGM5ZWM1ZGUyOTJlYzhhNTc4NWQ0ZGQ4ZTU5M2E5ZDU1YjU5Yzg3MU4mqFQ=: 00:19:08.533 11:27:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.533 11:27:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:08.533 11:27:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.533 11:27:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.793 11:27:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.793 11:27:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 00:19:08.793 11:27:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.793 11:27:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.793 11:27:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.793 11:27:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:08.793 11:27:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:08.793 11:27:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:08.793 11:27:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:08.793 11:27:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:08.793 11:27:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:08.793 11:27:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:08.793 11:27:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:08.793 11:27:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:40.896 request: 00:19:40.896 { 00:19:40.896 "name": "nvme0", 00:19:40.896 "trtype": "rdma", 00:19:40.896 "traddr": "192.168.100.8", 00:19:40.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:19:40.896 "adrfam": "ipv4", 00:19:40.896 "trsvcid": "4420", 00:19:40.896 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:40.896 "dhchap_key": "key2", 00:19:40.896 "method": "bdev_nvme_attach_controller", 00:19:40.896 "req_id": 1 00:19:40.896 } 00:19:40.896 Got JSON-RPC error response 00:19:40.896 response: 00:19:40.896 { 00:19:40.896 "code": -5, 00:19:40.896 "message": "Input/output error" 00:19:40.896 } 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 
00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:40.896 request: 00:19:40.896 { 00:19:40.896 "name": "nvme0", 00:19:40.896 "trtype": "rdma", 00:19:40.896 "traddr": "192.168.100.8", 00:19:40.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:19:40.896 "adrfam": "ipv4", 00:19:40.896 "trsvcid": "4420", 00:19:40.896 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:40.896 "dhchap_key": "key1", 00:19:40.896 "dhchap_ctrlr_key": "ckey2", 00:19:40.896 "method": "bdev_nvme_attach_controller", 00:19:40.896 "req_id": 1 00:19:40.896 } 00:19:40.896 Got JSON-RPC error response 00:19:40.896 response: 00:19:40.896 { 00:19:40.896 "code": -5, 00:19:40.896 "message": "Input/output error" 00:19:40.896 } 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.896 11:28:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.004 request: 00:20:13.004 { 00:20:13.004 "name": "nvme0", 00:20:13.004 "trtype": "rdma", 00:20:13.004 "traddr": "192.168.100.8", 00:20:13.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:13.004 "adrfam": "ipv4", 00:20:13.004 "trsvcid": "4420", 00:20:13.004 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:13.004 "dhchap_key": "key1", 00:20:13.004 "dhchap_ctrlr_key": "ckey1", 00:20:13.004 "method": "bdev_nvme_attach_controller", 00:20:13.004 "req_id": 1 00:20:13.004 } 00:20:13.004 Got JSON-RPC error response 00:20:13.004 response: 00:20:13.004 { 00:20:13.004 "code": -5, 00:20:13.004 "message": "Input/output error" 00:20:13.004 } 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3590960 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 3590960 ']' 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 3590960 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3590960 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3590960' 00:20:13.004 killing process with pid 3590960 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 3590960 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 3590960 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3632656 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3632656 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 3632656 ']' 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:13.004 11:28:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.004 11:28:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:13.004 11:28:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:20:13.004 11:28:40 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:13.004 11:28:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:13.004 11:28:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.004 11:28:40 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.004 11:28:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:13.004 11:28:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3632656 00:20:13.004 11:28:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 3632656 ']' 00:20:13.004 11:28:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.004 11:28:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:13.004 11:28:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:13.004 11:28:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:13.004 11:28:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.004 11:28:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:13.004 11:28:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:20:13.004 11:28:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:13.004 11:28:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.004 11:28:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.005 11:28:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.005 11:28:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:13.005 11:28:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.005 11:28:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:13.005 11:28:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:13.005 11:28:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:13.005 11:28:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.005 11:28:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:20:13.005 11:28:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.005 11:28:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.005 11:28:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.005 11:28:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.005 11:28:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.005 00:20:13.005 11:28:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.005 11:28:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.005 11:28:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.005 11:28:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.005 11:28:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.005 11:28:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.005 11:28:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.005 11:28:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.005 11:28:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:20:13.005 { 00:20:13.005 "cntlid": 1, 00:20:13.005 "qid": 0, 00:20:13.005 "state": "enabled", 00:20:13.005 "listen_address": { 00:20:13.005 "trtype": "RDMA", 00:20:13.005 "adrfam": "IPv4", 00:20:13.005 "traddr": "192.168.100.8", 00:20:13.005 "trsvcid": "4420" 00:20:13.005 }, 00:20:13.005 "peer_address": { 00:20:13.005 "trtype": "RDMA", 00:20:13.005 "adrfam": "IPv4", 00:20:13.005 "traddr": "192.168.100.8", 00:20:13.005 "trsvcid": "41456" 00:20:13.005 }, 00:20:13.005 "auth": { 00:20:13.005 "state": "completed", 00:20:13.005 "digest": "sha512", 00:20:13.005 "dhgroup": "ffdhe8192" 00:20:13.005 } 00:20:13.005 } 00:20:13.005 ]' 00:20:13.005 11:28:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.005 11:28:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:13.005 11:28:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.005 11:28:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:13.005 11:28:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.005 11:28:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.005 11:28:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.005 11:28:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.005 11:28:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:NjIyMjUzZTIxNGUzNDNhNDJkNTYzOWE5NmE0OGJlOTAwZTc4MDgzOTZmZjIxZTMzMDEwMDBiYWRmNjc1NTY0YZfzqcw=: 00:20:13.946 11:28:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.946 11:28:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:13.946 11:28:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.946 11:28:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.946 11:28:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.946 11:28:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:20:13.946 11:28:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.946 11:28:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.946 11:28:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.946 11:28:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:13.946 11:28:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:13.946 11:28:42 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.946 11:28:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:20:13.946 11:28:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.946 11:28:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:20:13.946 11:28:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:13.946 11:28:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:20:13.946 11:28:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:13.946 11:28:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.946 11:28:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:46.100 request: 00:20:46.100 { 00:20:46.100 "name": "nvme0", 00:20:46.100 "trtype": "rdma", 00:20:46.100 "traddr": "192.168.100.8", 00:20:46.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:46.100 "adrfam": "ipv4", 00:20:46.100 "trsvcid": "4420", 00:20:46.100 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:46.100 "dhchap_key": "key3", 00:20:46.100 "method": "bdev_nvme_attach_controller", 00:20:46.100 "req_id": 1 00:20:46.100 } 00:20:46.100 Got JSON-RPC error response 00:20:46.100 response: 00:20:46.100 { 00:20:46.100 "code": -5, 00:20:46.100 "message": "Input/output error" 00:20:46.100 } 00:20:46.100 11:29:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:20:46.100 11:29:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:46.100 11:29:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:46.100 11:29:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:46.100 11:29:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:46.100 11:29:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:46.100 11:29:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:46.100 11:29:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:46.100 11:29:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:46.100 11:29:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:20:46.100 11:29:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:46.100 11:29:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:20:46.100 11:29:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:46.100 11:29:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:20:46.100 11:29:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:46.100 11:29:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:46.100 11:29:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:18.218 request: 00:21:18.218 { 00:21:18.218 "name": "nvme0", 00:21:18.218 "trtype": "rdma", 00:21:18.218 "traddr": "192.168.100.8", 00:21:18.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:18.218 "adrfam": "ipv4", 00:21:18.218 "trsvcid": "4420", 00:21:18.218 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:18.218 "dhchap_key": "key3", 00:21:18.218 "method": "bdev_nvme_attach_controller", 00:21:18.218 "req_id": 1 00:21:18.218 } 00:21:18.218 Got JSON-RPC error response 00:21:18.218 response: 00:21:18.218 { 00:21:18.218 "code": -5, 00:21:18.218 "message": "Input/output error" 00:21:18.218 } 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:18.218 request: 00:21:18.218 { 00:21:18.218 "name": "nvme0", 00:21:18.218 "trtype": "rdma", 00:21:18.218 "traddr": "192.168.100.8", 00:21:18.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:18.218 "adrfam": "ipv4", 00:21:18.218 "trsvcid": "4420", 00:21:18.218 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:18.218 "dhchap_key": "key0", 00:21:18.218 "dhchap_ctrlr_key": "key1", 00:21:18.218 "method": "bdev_nvme_attach_controller", 00:21:18.218 "req_id": 1 00:21:18.218 } 00:21:18.218 Got JSON-RPC error response 00:21:18.218 response: 00:21:18.218 { 00:21:18.218 "code": -5, 
00:21:18.218 "message": "Input/output error" 00:21:18.218 } 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:18.218 11:29:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:18.218 11:29:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:18.218 11:29:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:18.218 11:29:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:18.219 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3591298 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 3591298 ']' 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 3591298 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3591298 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3591298' 00:21:18.219 killing process with pid 3591298 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 3591298 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 3591298 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@117 -- # sync 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:18.219 rmmod nvme_rdma 00:21:18.219 rmmod nvme_fabrics 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3632656 ']' 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3632656 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 3632656 ']' 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 3632656 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3632656 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3632656' 00:21:18.219 killing process with pid 3632656 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 3632656 00:21:18.219 11:29:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 3632656 00:21:18.219 11:29:45 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:18.219 11:29:45 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:18.219 11:29:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.6qB /tmp/spdk.key-sha256.6OY /tmp/spdk.key-sha384.2FR /tmp/spdk.key-sha512.gyw /tmp/spdk.key-sha512.2UW /tmp/spdk.key-sha384.qyi /tmp/spdk.key-sha256.rzV '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:21:18.219 00:21:18.219 real 4m39.019s 00:21:18.219 user 9m53.198s 00:21:18.219 sys 0m19.671s 00:21:18.219 11:29:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:18.219 11:29:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.219 ************************************ 00:21:18.219 END TEST nvmf_auth_target 00:21:18.219 ************************************ 00:21:18.219 11:29:45 nvmf_rdma -- nvmf/nvmf.sh@59 -- # '[' rdma = tcp ']' 00:21:18.219 11:29:45 nvmf_rdma -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:18.219 11:29:45 nvmf_rdma -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:21:18.219 11:29:45 nvmf_rdma -- nvmf/nvmf.sh@72 -- # '[' rdma = tcp ']' 00:21:18.219 11:29:45 nvmf_rdma -- nvmf/nvmf.sh@78 -- # [[ rdma == \r\d\m\a ]] 00:21:18.219 
11:29:45 nvmf_rdma -- nvmf/nvmf.sh@79 -- # run_test nvmf_device_removal test/nvmf/target/device_removal.sh --transport=rdma 00:21:18.219 11:29:45 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:18.219 11:29:45 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:18.219 11:29:45 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:18.219 ************************************ 00:21:18.219 START TEST nvmf_device_removal 00:21:18.219 ************************************ 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1124 -- # test/nvmf/target/device_removal.sh --transport=rdma 00:21:18.219 * Looking for test storage... 00:21:18.219 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@34 -- # set -e 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@36 -- # shopt -s extglob 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 
00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@22 -- # CONFIG_CET=n 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:21:18.219 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- 
common/build_config.sh@48 -- # CONFIG_RDMA=y 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@70 -- # CONFIG_FC=n 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:21:18.220 11:29:45 
nvmf_rdma.nvmf_device_removal -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@83 -- # CONFIG_URING=n 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:21:18.220 #define SPDK_CONFIG_H 00:21:18.220 #define SPDK_CONFIG_APPS 1 00:21:18.220 #define SPDK_CONFIG_ARCH native 00:21:18.220 #undef SPDK_CONFIG_ASAN 00:21:18.220 #undef SPDK_CONFIG_AVAHI 00:21:18.220 #undef SPDK_CONFIG_CET 00:21:18.220 #define SPDK_CONFIG_COVERAGE 1 00:21:18.220 #define SPDK_CONFIG_CROSS_PREFIX 00:21:18.220 #undef SPDK_CONFIG_CRYPTO 00:21:18.220 #undef SPDK_CONFIG_CRYPTO_MLX5 00:21:18.220 #undef SPDK_CONFIG_CUSTOMOCF 00:21:18.220 #undef SPDK_CONFIG_DAOS 00:21:18.220 #define SPDK_CONFIG_DAOS_DIR 00:21:18.220 #define SPDK_CONFIG_DEBUG 1 00:21:18.220 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:21:18.220 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:21:18.220 #define SPDK_CONFIG_DPDK_INC_DIR 00:21:18.220 #define SPDK_CONFIG_DPDK_LIB_DIR 00:21:18.220 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:21:18.220 #undef SPDK_CONFIG_DPDK_UADK 00:21:18.220 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:21:18.220 #define SPDK_CONFIG_EXAMPLES 1 00:21:18.220 #undef SPDK_CONFIG_FC 00:21:18.220 #define SPDK_CONFIG_FC_PATH 
00:21:18.220 #define SPDK_CONFIG_FIO_PLUGIN 1 00:21:18.220 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:21:18.220 #undef SPDK_CONFIG_FUSE 00:21:18.220 #undef SPDK_CONFIG_FUZZER 00:21:18.220 #define SPDK_CONFIG_FUZZER_LIB 00:21:18.220 #undef SPDK_CONFIG_GOLANG 00:21:18.220 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:21:18.220 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:21:18.220 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:21:18.220 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:21:18.220 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:21:18.220 #undef SPDK_CONFIG_HAVE_LIBBSD 00:21:18.220 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:21:18.220 #define SPDK_CONFIG_IDXD 1 00:21:18.220 #define SPDK_CONFIG_IDXD_KERNEL 1 00:21:18.220 #undef SPDK_CONFIG_IPSEC_MB 00:21:18.220 #define SPDK_CONFIG_IPSEC_MB_DIR 00:21:18.220 #define SPDK_CONFIG_ISAL 1 00:21:18.220 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:21:18.220 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:21:18.220 #define SPDK_CONFIG_LIBDIR 00:21:18.220 #undef SPDK_CONFIG_LTO 00:21:18.220 #define SPDK_CONFIG_MAX_LCORES 00:21:18.220 #define SPDK_CONFIG_NVME_CUSE 1 00:21:18.220 #undef SPDK_CONFIG_OCF 00:21:18.220 #define SPDK_CONFIG_OCF_PATH 00:21:18.220 #define SPDK_CONFIG_OPENSSL_PATH 00:21:18.220 #undef SPDK_CONFIG_PGO_CAPTURE 00:21:18.220 #define SPDK_CONFIG_PGO_DIR 00:21:18.220 #undef SPDK_CONFIG_PGO_USE 00:21:18.220 #define SPDK_CONFIG_PREFIX /usr/local 00:21:18.220 #undef SPDK_CONFIG_RAID5F 00:21:18.220 #undef SPDK_CONFIG_RBD 00:21:18.220 #define SPDK_CONFIG_RDMA 1 00:21:18.220 #define SPDK_CONFIG_RDMA_PROV verbs 00:21:18.220 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:21:18.220 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:21:18.220 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:21:18.220 #define SPDK_CONFIG_SHARED 1 00:21:18.220 #undef SPDK_CONFIG_SMA 00:21:18.220 #define SPDK_CONFIG_TESTS 1 00:21:18.220 #undef SPDK_CONFIG_TSAN 00:21:18.220 #define SPDK_CONFIG_UBLK 1 00:21:18.220 #define SPDK_CONFIG_UBSAN 1 00:21:18.220 #undef SPDK_CONFIG_UNIT_TESTS 00:21:18.220 #undef SPDK_CONFIG_URING 00:21:18.220 #define SPDK_CONFIG_URING_PATH 00:21:18.220 #undef SPDK_CONFIG_URING_ZNS 00:21:18.220 #undef SPDK_CONFIG_USDT 00:21:18.220 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:21:18.220 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:21:18.220 #undef SPDK_CONFIG_VFIO_USER 00:21:18.220 #define SPDK_CONFIG_VFIO_USER_DIR 00:21:18.220 #define SPDK_CONFIG_VHOST 1 00:21:18.220 #define SPDK_CONFIG_VIRTIO 1 00:21:18.220 #undef SPDK_CONFIG_VTUNE 00:21:18.220 #define SPDK_CONFIG_VTUNE_DIR 00:21:18.220 #define SPDK_CONFIG_WERROR 1 00:21:18.220 #define SPDK_CONFIG_WPDK_DIR 00:21:18.220 #undef SPDK_CONFIG_XNVME 00:21:18.220 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:18.220 11:29:45 nvmf_rdma.nvmf_device_removal -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- paths/export.sh@5 -- # export PATH 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@64 -- # TEST_TAG=N/A 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 
00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@68 -- # uname -s 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@68 -- # PM_OS=Linux 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@76 -- # SUDO[0]= 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@76 -- # SUDO[1]='sudo -E' 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ Linux == Linux ]] 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@58 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@62 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@64 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@66 -- # : 1 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@68 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@70 -- # : 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@72 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@74 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@76 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@78 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@80 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@82 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@84 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@86 -- # : 1 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@88 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@90 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@92 -- # : 1 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- 
common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@94 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@96 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@98 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@100 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@102 -- # : rdma 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@104 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@106 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@108 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@110 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@112 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@114 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@116 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@118 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@120 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@122 -- # : 1 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@124 -- # : 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@126 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal 
-- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@128 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@130 -- # : 0 00:21:18.221 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@132 -- # : 0 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@134 -- # : 0 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@136 -- # : 0 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@138 -- # : 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@140 -- # : true 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@142 -- # : 0 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@144 -- # : 0 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@146 -- # : 0 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@148 -- # : 0 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@150 -- # : 0 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@152 -- # : 0 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@154 -- # : mlx5 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@156 -- # : 0 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@158 -- # : 0 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@160 -- # : 0 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- 
common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@162 -- # : 0 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@164 -- # : 0 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@167 -- # : 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@169 -- # : 0 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@171 -- # : 0 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@200 -- # cat 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:21:18.222 11:29:45 
nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@263 -- # export valgrind= 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@263 -- # valgrind= 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@269 -- # uname -s 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:21:18.222 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:21:18.223 11:29:45 
nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@279 -- # MAKE=make 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@299 -- # TEST_MODE= 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@300 -- # for i in "$@" 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@301 -- # case "$i" in 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=rdma 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@318 -- # [[ -z 3645439 ]] 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@318 -- # kill -0 3645439 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@331 -- # local mount target_dir 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.QVuHNH 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.QVuHNH/tests/target /tmp/spdk.QVuHNH 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@327 -- # df -T 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=959328256 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=4325101568 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=122953224192 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=129371025408 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=6417801216 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=64675385344 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685510656 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=10125312 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=25850847232 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874206720 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=23359488 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=394240 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=109568 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=64685006848 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685514752 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=507904 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=12937097216 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937101312 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:21:18.223 * Looking for test storage... 
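The df walk traced above (set_test_storage, @327 through @363) snapshots every mounted filesystem into five associative arrays keyed by mount point, which the candidate-selection step then consults. A condensed sketch of that loop, with variable names taken directly from the trace; the *1024 scaling is an assumption, based on df(1) reporting 1K blocks while the logged values (e.g. sizes[/]=129371025408) are bytes:

# condensed sketch of the storage probe traced above
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source           # e.g. spdk_root for /
    fss["$mount"]=$fs                  # e.g. overlay, tmpfs, efivarfs
    sizes["$mount"]=$((size * 1024))   # total bytes (assumed 1K-block scaling)
    avails["$mount"]=$((avail * 1024)) # free bytes
    uses["$mount"]=$((use * 1024))     # used bytes
done < <(df -T | grep -v Filesystem)   # matches @327 in the trace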
00:21:18.223 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@368 -- # local target_space new_size 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@372 -- # mount=/ 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@374 -- # target_space=122953224192 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@381 -- # new_size=8632393728 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:18.224 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@389 -- # return 0 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1681 -- # set -o errtrace 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1686 -- # true 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1688 -- # xtrace_fd 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@27 -- # exec 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@29 -- # exec 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@31 -- # xtrace_restore 
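The selection step traced above (@368 through @389) then walks storage_candidates and exports the first directory whose filesystem has room. A condensed sketch reusing the arrays from the probe; the concrete numbers from this run are in the comments, and requested_size/new_size match the @358 and @381 trace lines:

# condensed sketch of the candidate walk traced above
requested_size=2214592512                       # 2 GiB + 64 MiB slack, per @358
for target_dir in "${storage_candidates[@]}"; do
    # resolve the mount point backing the candidate (@372); "/" in this run
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails[$mount]}              # 122953224192 here
    ((target_space >= requested_size)) || continue
    if [[ ${fss[$mount]} != tmpfs && ${fss[$mount]} != ramfs && $mount == / ]]; then
        # projected usage if the test claims its 2 GiB on the root fs:
        new_size=$((uses[$mount] + requested_size))   # 6417801216 + 2214592512 = 8632393728
        ((new_size * 100 / sizes[$mount] > 95)) && continue  # ~6.7% here, so it passes
    fi
    export SPDK_TEST_STORAGE=$target_dir        # @387; logged as "Found test storage at ..."
    break
done

The >95% guard is why new_size is computed at all: a candidate on the root filesystem is rejected if claiming another ~2 GiB would leave it nearly full, which protects the rest of the run from ENOSPC failures.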
00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@18 -- # set -x 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@7 -- # uname -s 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- paths/export.sh@5 -- # export PATH 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@47 -- # : 0 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@13 -- # tgt_core_mask=0x3 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@14 -- # bdevperf_core_mask=0x4 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@15 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@16 -- # bdevperf_rpc_pid=-1 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@18 -- # nvmftestinit 00:21:18.224 11:29:45 
nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@285 -- # xtrace_disable 00:21:18.224 11:29:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@291 -- # pci_devs=() 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@295 -- # net_devs=() 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@296 -- # e810=() 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@296 -- # local -ga e810 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@297 -- # x722=() 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@297 -- # local -ga x722 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@298 -- # mlx=() 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@298 -- # local -ga mlx 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.515 11:29:52 
nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:21:23.515 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:21:23.515 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:23.515 11:29:52 
nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:21:23.515 Found net devices under 0000:98:00.0: mlx_0_0 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.515 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:21:23.516 Found net devices under 0000:98:00.1: mlx_0_1 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # is_hw=yes 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@420 -- # rdma_device_init 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@58 -- # uname 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:23.516 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:23.516 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:21:23.516 altname enp152s0f0np0 00:21:23.516 altname ens817f0np0 00:21:23.516 inet 192.168.100.8/24 scope global mlx_0_0 00:21:23.516 valid_lft forever preferred_lft forever 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@81 -- # ip addr 
show mlx_0_1 00:21:23.516 11: mlx_0_1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:23.516 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:21:23.516 altname enp152s0f1np1 00:21:23.516 altname ens817f1np1 00:21:23.516 inet 192.168.100.9/24 scope global mlx_0_1 00:21:23.516 valid_lft forever preferred_lft forever 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@422 -- # return 0 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:23.516 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:23.777 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:23.777 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@87 
-- # get_ip_address mlx_0_1 00:21:23.777 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:23.777 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:23.777 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:23.777 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:23.777 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:23.777 192.168.100.9' 00:21:23.777 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:23.777 192.168.100.9' 00:21:23.777 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # head -n 1 00:21:23.777 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:23.777 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:23.777 192.168.100.9' 00:21:23.777 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # tail -n +2 00:21:23.777 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # head -n 1 00:21:23.777 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:23.777 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:23.777 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:23.777 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:23.777 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:23.777 11:29:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:23.777 11:29:52 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@235 -- # BOND_NAME=bond_nvmf 00:21:23.777 11:29:52 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@236 -- # BOND_IP=10.11.11.26 00:21:23.777 11:29:52 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@237 -- # BOND_MASK=24 00:21:23.777 11:29:52 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@311 -- # run_test nvmf_device_removal_pci_remove_no_srq test_remove_and_rescan --no-srq 00:21:23.778 11:29:52 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:23.778 11:29:52 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:23.778 11:29:52 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:21:23.778 ************************************ 00:21:23.778 START TEST nvmf_device_removal_pci_remove_no_srq 00:21:23.778 ************************************ 00:21:23.778 11:29:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@1124 -- # test_remove_and_rescan --no-srq 00:21:23.778 11:29:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:21:23.778 11:29:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:23.778 11:29:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:23.778 11:29:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:21:23.778 11:29:52 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@481 -- # nvmfpid=3649105 00:21:23.778 11:29:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@482 -- # waitforlisten 3649105 00:21:23.778 11:29:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:23.778 11:29:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@830 -- # '[' -z 3649105 ']' 00:21:23.778 11:29:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.778 11:29:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:23.778 11:29:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.778 11:29:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:23.778 11:29:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:21:23.778 [2024-06-10 11:29:52.652993] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:21:23.778 [2024-06-10 11:29:52.653067] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.778 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.778 [2024-06-10 11:29:52.724380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:24.038 [2024-06-10 11:29:52.798749] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.038 [2024-06-10 11:29:52.798790] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.038 [2024-06-10 11:29:52.798797] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.038 [2024-06-10 11:29:52.798807] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.038 [2024-06-10 11:29:52.798813] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
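For context, waitforlisten above simply blocks until the freshly started nvmf_tgt answers on its UNIX-domain RPC socket. A minimal sketch of that polling loop, assuming SPDK's standard scripts/rpc.py client (rpc_get_methods is a core RPC; the retry count and sleep interval here are illustrative, not the exact values the harness uses):

  # Poll the target's RPC socket until the app finishes initialization.
  rpc_sock=/var/tmp/spdk.sock
  for _ in $(seq 1 100); do
      # rpc_get_methods only succeeds once the RPC server is listening
      if scripts/rpc.py -t 1 -s "$rpc_sock" rpc_get_methods &> /dev/null; then
          break
      fi
      sleep 0.5
  done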
00:21:24.038 [2024-06-10 11:29:52.798989] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.038 [2024-06-10 11:29:52.798991] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@863 -- # return 0 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@130 -- # create_subsystem_and_connect --no-srq 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 --no-srq 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:21:24.609 [2024-06-10 11:29:53.500350] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x980a20/0x984f10) succeed. 00:21:24.609 [2024-06-10 11:29:53.513613] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x981f20/0x9c65a0) succeed. 
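The transport created above is a single RPC against that socket. A sketch of the equivalent direct call, with the flags taken verbatim from the trace (rpc_cmd is a thin wrapper around scripts/rpc.py; the option glosses below are per current SPDK usage):

  # RDMA transport for the --no-srq test variant:
  #   -u 8192                    I/O unit size in bytes
  #   --num-shared-buffers 1024  size of the shared data-buffer pool
  #   --no-srq                   disable shared receive queues, so each
  #                              queue pair posts its own receive buffers
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport \
      -t rdma -u 8192 --num-shared-buffers 1024 --no-srq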
00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # get_rdma_if_list 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@105 -- # continue 2 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@105 -- # continue 2 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@25 -- # local -a dev_name 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:21:24.609 
11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:21:24.609 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:21:24.610 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.610 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:21:24.870 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.870 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:21:24.870 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.870 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:21:24.870 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.870 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:21:24.870 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.870 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:21:24.870 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.870 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 00:21:24.870 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.870 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:21:24.870 [2024-06-10 11:29:53.656706] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:24.870 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.870 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@41 -- # return 0 00:21:24.870 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:21:24.870 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:21:24.870 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:21:24.870 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@25 -- # local -a dev_name 00:21:24.870 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:21:24.870 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:21:24.871 11:29:53 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:21:24.871 [2024-06-10 11:29:53.741332] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@41 -- # return 0 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@53 -- # return 0 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:21:24.871 11:29:53 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@87 -- # local dev_names 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@91 -- # bdevperf_pid=3649469 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@94 -- # waitforlisten 3649469 /var/tmp/bdevperf.sock 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@830 -- # '[' -z 3649469 ']' 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:24.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
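Stepping back, the create_subsystem_and_connect phase traced above reduces to four RPCs per RDMA port. A condensed sketch for mlx_0_0, using exactly the names and arguments this run derives from the interface (the mlx_0_1 side is identical with 192.168.100.9 and its own NQN):

  dev=mlx_0_0; ip=192.168.100.8
  nqn=nqn.2016-06.io.spdk:system_${dev}
  # 128 MiB malloc bdev with 512-byte blocks, named after the NIC
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 128 512 -b "$dev"
  # Subsystem allowing any host (-a), serial number derived from the NIC name
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem "$nqn" -a -s "SPDK000${dev}"
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns "$nqn" "$dev"
  # Listener bound to the IP configured on that port
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener "$nqn" -t rdma -a "$ip" -s 4420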
00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:24.871 11:29:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:21:25.810 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:25.810 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@863 -- # return 0 00:21:25.810 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:25.810 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:25.810 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:21:25.810 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:25.810 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:21:25.810 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:21:25.810 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:21:25.810 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:21:25.811 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:21:25.811 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:25.811 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:25.811 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:25.811 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:25.811 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:21:25.811 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:21:25.811 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:25.811 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:21:25.811 Nvme_mlx_0_0n1 00:21:25.811 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:25.811 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 
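Each controller attach is likewise one RPC, but against bdevperf's own socket. A sketch for the first port with the flags verbatim from the trace (per current SPDK option names, -l is the controller-loss timeout in seconds and -o the reconnect delay; -l -1 means retry forever, which is what lets I/O ride out the upcoming hot-remove):

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1

The resulting bdev surfaces as Nvme_mlx_0_0n1, which bdevperf then drives with the -q 128 -o 4096 -w verify workload started above.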
00:21:25.811 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:21:25.811 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:21:25.811 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:21:25.811 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:21:25.811 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:25.811 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:25.811 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:25.811 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:25.811 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:21:25.811 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:21:25.811 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:25.811 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:21:26.071 Nvme_mlx_0_1n1 00:21:26.071 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.071 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=3649530 00:21:26.071 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@112 -- # sleep 5 00:21:26.071 11:29:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@109 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:21:31.355 11:29:59 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:98:00.0/net/mlx_0_0/device 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:97/0000:97:02.0/0000:98:00.0/infiniband 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:98:00.0/net/mlx_0_0/device 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:97/0000:97:02.0/0000:98:00.0 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_0 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:31.355 mlx5_0 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 0 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:21:31.355 11:29:59 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # echo 1 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:21:31.355 11:29:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:98:00.0/net/mlx_0_0/device 00:21:31.355 [2024-06-10 11:29:59.940651] rdma.c:3574:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 00:21:31.355 [2024-06-10 11:29:59.941655] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno No such device or address (6) 00:21:31.355 [2024-06-10 11:29:59.943562] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:21:31.355 [2024-06-10 11:29:59.943585] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 34 00:21:31.355 [2024-06-10 11:29:59.943591] rdma.c: 646:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1) 00:21:31.355 [2024-06-10 11:29:59.943596] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:31.355 [2024-06-10 11:29:59.943601] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:31.355
[... identical rdma.c: 632/634:nvmf_rdma_dump_request pairs elided: 'Request Data From Pool: 0' / 'Request opcode: 1' repeated once per remaining queued request, with a single 'Request opcode: 2' entry among them ...]
00:21:37.978 11:30:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # seq 1 10 00:21:37.978 11:30:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:21:37.978 11:30:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:21:37.978 11:30:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:21:37.978 11:30:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:21:37.978 11:30:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:21:37.978 11:30:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_0 00:21:37.978 11:30:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.978 11:30:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:21:37.978 11:30:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.978 11:30:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 1 00:21:37.978 11:30:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@149 -- # break 00:21:37.978 11:30:06 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:21:37.978 11:30:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:21:37.978 11:30:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:21:37.978 11:30:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:21:37.978 11:30:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.978 11:30:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:21:37.978 11:30:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.978 11:30:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:21:37.978 11:30:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@160 -- # rescan_pci 00:21:37.978 11:30:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@57 -- # echo 1 00:21:38.919 [2024-06-10 11:30:07.602220] rdma.c:3263:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x981b50, err 11. Skip rescan. 00:21:38.919 11:30:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # seq 1 10 00:21:38.919 11:30:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:21:38.920 11:30:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:97/0000:97:02.0/0000:98:00.0/net 00:21:38.920 11:30:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:21:38.920 11:30:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:21:38.920 11:30:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:21:38.920 11:30:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@171 -- # break 00:21:38.920 11:30:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:21:38.920 11:30:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:21:39.180 [2024-06-10 11:30:07.963340] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb8a160/0x984f10) succeed. 00:21:39.180 [2024-06-10 11:30:07.963391] rdma.c:3316:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 
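The removal and recovery around this point rely on the kernel's generic PCI sysfs knobs rather than anything mlx5-specific. A sketch of the mechanism, assuming the remove_one_nic and rescan_pci helpers write to the standard remove/rescan nodes (the bare 'echo 1' lines in the trace do not show their redirection targets; the readlink path is straight from the log):

  # Resolve the netdev's PCI device directory, then hot-remove the function.
  pci_dir=$(readlink -f /sys/bus/pci/devices/0000:98:00.0/net/mlx_0_0/device)
  echo 1 > "$pci_dir/remove"     # mlx5_0 vanishes; queued I/O gets dumped
  echo 1 > /sys/bus/pci/rescan   # PCI core re-enumerates the slot
  # The re-probed port comes back admin-down and unaddressed:
  ip link set mlx_0_0 up
  ip addr add 192.168.100.8/24 dev mlx_0_0

After that, the wait loop polls rpc_cmd nvmf_get_stats and counts .poll_groups[0].transports[].devices with jq until the device count climbs back above ib_count_after_remove.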
00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # seq 1 10 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:21:42.481 [2024-06-10 11:30:11.225984] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:42.481 [2024-06-10 11:30:11.226013] rdma.c:3322:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:21:42.481 [2024-06-10 11:30:11.226024] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:21:42.481 [2024-06-10 11:30:11.226033] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # ib_count=2 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@189 -- # break 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:98:00.1/net/mlx_0_1/device 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:97/0000:97:02.0/0000:98:00.1/infiniband 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:98:00.1/net/mlx_0_1/device 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:97/0000:97:02.0/0000:98:00.1 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:42.481 11:30:11 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_1 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:42.481 mlx5_1 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 0 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # echo 1 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:21:42.481 11:30:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:98:00.1/net/mlx_0_1/device 00:21:42.481 [2024-06-10 11:30:11.390165] rdma.c:3574:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 00:21:42.481 [2024-06-10 11:30:11.390232] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:21:42.481 [2024-06-10 11:30:11.396069] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:21:42.481 [2024-06-10 11:30:11.396098] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 127 00:21:42.481 [2024-06-10 11:30:11.396105] rdma.c: 646:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1) 00:21:42.481 [2024-06-10 11:30:11.396112] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.481 [2024-06-10 11:30:11.396117] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.481 [2024-06-10 11:30:11.396123] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.481 [2024-06-10 11:30:11.396128] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.481 [2024-06-10 11:30:11.396133] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.481 [2024-06-10 11:30:11.396139] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.481 [2024-06-10 11:30:11.396144] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.481 [2024-06-10 11:30:11.396149] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.481 [2024-06-10 11:30:11.396154] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.481 [2024-06-10 11:30:11.396159] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.482 [2024-06-10 11:30:11.396169] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396174] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396179] rdma.c: 632:nvmf_rdma_dump_request: 
*ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396185] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396190] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396195] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396201] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396206] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396212] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396217] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396222] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396228] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396233] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396238] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396243] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396248] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396253] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396258] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396263] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396268] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396274] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.482 [2024-06-10 11:30:11.396279] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.482 [2024-06-10 11:30:11.396284] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396289] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396294] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.482 [2024-06-10 11:30:11.396299] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.482 [2024-06-10 11:30:11.396304] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396309] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396314] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396319] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396324] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396329] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396335] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396340] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396345] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.482 [2024-06-10 
11:30:11.396350] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.482 [2024-06-10 11:30:11.396355] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396360] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396365] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396370] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396375] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396380] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396387] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396392] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396397] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396402] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396407] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396412] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396417] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396422] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396427] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396432] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396437] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.482 [2024-06-10 11:30:11.396442] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.482 [2024-06-10 11:30:11.396447] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396453] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396459] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396464] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396469] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.482 [2024-06-10 11:30:11.396474] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.482 [2024-06-10 11:30:11.396480] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396485] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396490] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.482 [2024-06-10 11:30:11.396495] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.482 [2024-06-10 11:30:11.396500] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396505] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396510] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.482 [2024-06-10 11:30:11.396515] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: 
Request opcode: 2 00:21:42.482 [2024-06-10 11:30:11.396521] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396526] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396532] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396537] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396542] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396547] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396552] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396557] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396562] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396567] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396573] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396578] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396583] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396588] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396593] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396599] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396604] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396609] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396614] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396619] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396625] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.482 [2024-06-10 11:30:11.396630] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.482 [2024-06-10 11:30:11.396635] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396640] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396645] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.482 [2024-06-10 11:30:11.396650] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.482 [2024-06-10 11:30:11.396655] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.482 [2024-06-10 11:30:11.396660] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.483 [2024-06-10 11:30:11.396665] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.396670] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.396676] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.396683] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.396691] 
rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.483 [2024-06-10 11:30:11.396698] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.483 [2024-06-10 11:30:11.396705] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.396713] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.396718] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.396724] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.396731] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.396736] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.396742] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.396747] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.396752] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.396757] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.396769] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.396774] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.396779] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.396785] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.396790] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.396795] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.396800] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.396805] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.396811] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.396816] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.396821] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.396826] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.396832] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.396838] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.396844] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.396849] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.396854] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.396859] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.396864] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.396870] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.396875] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From 
Pool: 0 00:21:42.483 [2024-06-10 11:30:11.396880] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.396885] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.396890] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.396896] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.396901] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.396906] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.396911] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.396916] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.396921] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.396926] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.483 [2024-06-10 11:30:11.396931] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.483 [2024-06-10 11:30:11.396936] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.483 [2024-06-10 11:30:11.396941] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.483 [2024-06-10 11:30:11.396947] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.483 [2024-06-10 11:30:11.396952] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.483 [2024-06-10 11:30:11.396957] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.483 [2024-06-10 11:30:11.396962] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.483 [2024-06-10 11:30:11.396968] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.396973] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.396978] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.396983] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.396989] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.483 [2024-06-10 11:30:11.396994] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.483 [2024-06-10 11:30:11.396999] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.397004] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.397009] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.397014] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.397020] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.483 [2024-06-10 11:30:11.397025] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.483 [2024-06-10 11:30:11.397030] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.397035] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.397041] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.397046] rdma.c: 
634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.397051] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.397057] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.397062] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.397067] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.397073] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.397077] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.397082] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.397088] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.397093] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.397098] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.397103] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.397108] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.397114] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.397118] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.397123] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.397128] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.397134] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.483 [2024-06-10 11:30:11.397140] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.483 [2024-06-10 11:30:11.397145] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.397150] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.397155] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.397160] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.397166] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.483 [2024-06-10 11:30:11.397171] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.483 [2024-06-10 11:30:11.397176] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.397181] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.397186] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.397191] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.397196] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.397202] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.397206] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.397212] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 
00:21:42.483 [2024-06-10 11:30:11.397217] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.483 [2024-06-10 11:30:11.397222] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.483 [2024-06-10 11:30:11.397228] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.484 [2024-06-10 11:30:11.397233] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.484 [2024-06-10 11:30:11.397238] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.484 [2024-06-10 11:30:11.397244] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.484 [2024-06-10 11:30:11.397249] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.484 [2024-06-10 11:30:11.397255] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.484 [2024-06-10 11:30:11.397260] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.484 [2024-06-10 11:30:11.397265] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.484 [2024-06-10 11:30:11.397270] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.484 [2024-06-10 11:30:11.397276] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.484 [2024-06-10 11:30:11.397282] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.484 [2024-06-10 11:30:11.397287] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.484 [2024-06-10 11:30:11.397292] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.484 [2024-06-10 11:30:11.397297] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.484 [2024-06-10 11:30:11.397303] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.484 [2024-06-10 11:30:11.397307] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.484 [2024-06-10 11:30:11.397313] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.484 [2024-06-10 11:30:11.397318] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.484 [2024-06-10 11:30:11.397323] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.484 [2024-06-10 11:30:11.397328] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.484 [2024-06-10 11:30:11.397333] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.484 [2024-06-10 11:30:11.397338] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.484 [2024-06-10 11:30:11.397343] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.484 [2024-06-10 11:30:11.397348] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.484 [2024-06-10 11:30:11.397353] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.484 [2024-06-10 11:30:11.397358] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.484 [2024-06-10 11:30:11.397363] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.484 [2024-06-10 11:30:11.397368] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.484 [2024-06-10 11:30:11.397373] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.484 [2024-06-10 11:30:11.397378] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.484 [2024-06-10 11:30:11.397384] rdma.c: 
632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.484 [2024-06-10 11:30:11.397389] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.484 [2024-06-10 11:30:11.397394] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.484 [2024-06-10 11:30:11.397399] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.484 [2024-06-10 11:30:11.397404] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.484 [2024-06-10 11:30:11.397409] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.484 [2024-06-10 11:30:11.397414] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.484 [2024-06-10 11:30:11.397419] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.484 [2024-06-10 11:30:11.397424] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.484 [2024-06-10 11:30:11.397430] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:42.484 [2024-06-10 11:30:11.397434] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:21:42.484 [2024-06-10 11:30:11.397439] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:21:42.484 [2024-06-10 11:30:11.397445] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:21:42.484 [2024-06-10 11:30:11.397449] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:21:50.621 11:30:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # seq 1 10 00:21:50.621 11:30:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:21:50.621 11:30:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:21:50.621 11:30:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:21:50.621 11:30:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:21:50.621 11:30:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:21:50.621 11:30:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_1 00:21:50.621 11:30:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:50.621 11:30:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:21:50.621 11:30:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:50.621 11:30:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 1 00:21:50.621 11:30:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@149 -- # break 00:21:50.621 11:30:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:21:50.621 11:30:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:21:50.621 
11:30:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:21:50.621 11:30:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:21:50.621 11:30:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:50.621 11:30:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:21:50.621 11:30:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:50.621 11:30:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:21:50.621 11:30:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@160 -- # rescan_pci 00:21:50.621 11:30:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@57 -- # echo 1 00:21:50.621 [2024-06-10 11:30:19.378020] rdma.c:3263:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0xa07af0, err 11. Skip rescan. 00:21:50.621 11:30:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # seq 1 10 00:21:50.621 11:30:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:21:50.621 11:30:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:97/0000:97:02.0/0000:98:00.1/net 00:21:50.621 11:30:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1 00:21:50.621 11:30:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]] 00:21:50.621 11:30:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]] 00:21:50.621 11:30:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@171 -- # break 00:21:50.621 11:30:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]] 00:21:50.621 11:30:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up 00:21:50.881 [2024-06-10 11:30:19.720737] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf7fed0/0x9c65a0) succeed. 00:21:50.881 [2024-06-10 11:30:19.720798] rdma.c:3316:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen. 
00:21:54.180 11:30:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1 00:21:54.180 11:30:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:54.180 11:30:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:54.180 11:30:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:54.180 11:30:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:54.180 11:30:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:21:54.181 11:30:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:21:54.181 11:30:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # seq 1 10 00:21:54.181 11:30:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:21:54.181 11:30:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:21:54.181 11:30:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:21:54.181 11:30:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:21:54.181 11:30:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:21:54.181 11:30:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:54.181 11:30:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:21:54.181 [2024-06-10 11:30:23.046198] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:21:54.181 [2024-06-10 11:30:23.046238] rdma.c:3322:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:21:54.181 [2024-06-10 11:30:23.046257] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:21:54.181 [2024-06-10 11:30:23.046272] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:21:54.181 11:30:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:54.181 11:30:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # ib_count=2 00:21:54.181 11:30:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:21:54.181 11:30:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@189 -- # break 00:21:54.181 11:30:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@200 -- # stop_bdevperf 00:21:54.181 11:30:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@116 -- # wait 
3649530 00:23:01.957 0 00:23:01.957 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@118 -- # killprocess 3649469 00:23:01.957 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@949 -- # '[' -z 3649469 ']' 00:23:01.957 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@953 -- # kill -0 3649469 00:23:01.957 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@954 -- # uname 00:23:01.957 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:01.957 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3649469 00:23:01.957 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:23:01.957 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:23:01.957 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3649469' 00:23:01.957 killing process with pid 3649469 00:23:01.957 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@968 -- # kill 3649469 00:23:01.957 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@973 -- # wait 3649469 00:23:01.957 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@119 -- # bdevperf_pid= 00:23:01.957 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:23:01.957 [2024-06-10 11:29:53.796042] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:23:01.957 [2024-06-10 11:29:53.796096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3649469 ] 00:23:01.957 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.957 [2024-06-10 11:29:53.846434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.957 [2024-06-10 11:29:53.898336] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.957 Running I/O for 90 seconds... 
00:23:01.957 [2024-06-10 11:29:59.940933] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno No such device or address (6) 00:23:01.957 [2024-06-10 11:29:59.940965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.957 [2024-06-10 11:29:59.940972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32750 cdw0:6 sqhd:a3b9 p:0 m:0 dnr:0 00:23:01.957 [2024-06-10 11:29:59.940979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.958 [2024-06-10 11:29:59.940985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32750 cdw0:6 sqhd:a3b9 p:0 m:0 dnr:0 00:23:01.958 [2024-06-10 11:29:59.940991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.958 [2024-06-10 11:29:59.940996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32750 cdw0:6 sqhd:a3b9 p:0 m:0 dnr:0 00:23:01.958 [2024-06-10 11:29:59.941002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.958 [2024-06-10 11:29:59.941007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32750 cdw0:6 sqhd:a3b9 p:0 m:0 dnr:0 00:23:01.958 [2024-06-10 11:29:59.942817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:01.958 [2024-06-10 11:29:59.942827] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:23:01.958 [2024-06-10 11:29:59.942843] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:23:01.958 [2024-06-10 11:29:59.950410] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:29:59.960432] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:29:59.970456] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:29:59.980481] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:29:59.990507] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.000533] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.010560] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.020583] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.030607] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.040631] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:01.958 [2024-06-10 11:30:00.050656] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.060682] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.071168] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.081191] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.091215] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.101239] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.111266] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.121289] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.131951] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.142241] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.152268] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.162464] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.172608] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.182627] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.192654] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.202681] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.213333] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.223357] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.233869] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.244138] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.254249] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.264334] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.274360] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:01.958 [2024-06-10 11:30:00.284943] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.295024] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.305393] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.315418] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.325442] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.335469] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.345493] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.355520] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.365544] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.375568] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.385593] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.395618] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.405642] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.415667] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.425693] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.435718] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.445743] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.455772] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.465796] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.475822] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.485845] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.495872] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.505898] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:01.958 [2024-06-10 11:30:00.515923] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.525948] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.535974] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.546001] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.556023] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.566048] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.576071] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.586096] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.596121] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.606179] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.616343] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.626580] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.637142] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.647167] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.657193] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.667217] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.677816] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.687841] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.697901] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.707926] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.717954] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.728203] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.958 [2024-06-10 11:30:00.738228] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:01.958 [2024-06-10 11:30:00.748744] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.959 [2024-06-10 11:30:00.759069] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.959 [2024-06-10 11:30:00.769319] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.959 [2024-06-10 11:30:00.779418] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.959 [2024-06-10 11:30:00.789444] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.959 [2024-06-10 11:30:00.799681] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.959 [2024-06-10 11:30:00.809737] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.959 [2024-06-10 11:30:00.819860] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.959 [2024-06-10 11:30:00.829885] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.959 [2024-06-10 11:30:00.839912] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.959 [2024-06-10 11:30:00.850335] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.959 [2024-06-10 11:30:00.860402] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.959 [2024-06-10 11:30:00.870459] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.959 [2024-06-10 11:30:00.880486] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.959 [2024-06-10 11:30:00.890510] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.959 [2024-06-10 11:30:00.901255] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.959 [2024-06-10 11:30:00.911405] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.959 [2024-06-10 11:30:00.921518] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.959 [2024-06-10 11:30:00.931783] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:01.959 [2024-06-10 11:30:00.942077] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:01.959 [2024-06-10 11:30:00.945201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-06-10 11:30:00.945214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32750 cdw0:ea3bc280 sqhd:f540 p:0 m:0 dnr:0
[... 35 queued WRITEs in all (lba:61160 through lba:61432, len:8 each) are printed this way, every one completed as ABORTED - SQ DELETION (00/08) ...]
[2024-06-10 11:30:00.945688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fe000 len:0x1000 key:0x181700
[2024-06-10 11:30:00.945693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32750 cdw0:ea3bc280 sqhd:f540 p:0 m:0 dnr:0
[... 60 queued READs in all (lba:60416 through lba:60888, keyed SGL addresses 0x2000077fe000 down to 0x200007788000, key:0x181700) are printed and aborted the same way ...]
[2024-06-10 11:30:00.958535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-06-10 11:30:00.958543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-06-10 11:30:00.958548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60896 len:8 PRP1 0x0 PRP2 0x0
[2024-06-10 11:30:00.958553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-06-10 11:30:00.959537] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
[2024-06-10 11:30:00.959880] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
[2024-06-10 11:30:00.959889] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
[2024-06-10 11:30:00.959894] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
[2024-06-10 11:30:00.959905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
[2024-06-10 11:30:00.959911] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
[2024-06-10 11:30:00.959919] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
[2024-06-10 11:30:00.959924] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
[2024-06-10 11:30:00.959929] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
[2024-06-10 11:30:00.959944] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
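Every queued WRITE and READ above is completed with the same status string. SPDK prints NVMe completions with the status as "(SCT/SC)", status code type then status code, so "(00/08)" is status code type 0x0 (generic command status) with status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion: the I/O submission queue was deleted underneath the still-queued commands when the RDMA qpair was torn down for the failover. A small decoder sketch, tabulating only the codes that actually occur in this log:

# Decode the "(SCT/SC)" status pair from spdk_nvme_print_completion output,
# e.g. "(00/08)" above. Only codes seen in this log are tabulated; a full
# decoder would mirror the status tables in the NVMe base specification.
GENERIC_STATUS = {        # Status Code Type 0x0: generic command status
    0x00: "SUCCESSFUL COMPLETION",
    0x08: "COMMAND ABORTED DUE TO SQ DELETION",
}

def decode_status(pair: str) -> str:
    sct, sc = (int(field, 16) for field in pair.split("/"))
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"generic status, sc=0x{sc:02x}")
    return f"sct=0x{sct:x}, sc=0x{sc:02x}"

print(decode_status("00/08"))   # -> COMMAND ABORTED DUE TO SQ DELETION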
00:23:01.962 [2024-06-10 11:30:00.959949] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
[2024-06-10 11:30:01.962663] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
[2024-06-10 11:30:01.962679] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
[2024-06-10 11:30:01.962684] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
[2024-06-10 11:30:01.962694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
[2024-06-10 11:30:01.962700] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
[2024-06-10 11:30:01.962708] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
[2024-06-10 11:30:01.962712] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
[2024-06-10 11:30:01.962717] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
[2024-06-10 11:30:01.962732] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... the same reset/reconnect cycle for nqn.2016-06.io.spdk:system_mlx_0_0 repeats with attempts at 11:30:02.965, 11:30:03.967, 11:30:05.972, 11:30:07.977 and 11:30:09.982; from the 11:30:05.972 attempt onward the RDMA_CM_EVENT_ADDR_ERROR line is no longer printed and each attempt fails directly with "RDMA address resolution error" ...]
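The reset attempts above land at 11:30:00.959, 01.962, 02.965, 03.967, 05.972, 07.977 and 09.982: roughly one second apart for the first attempts, then roughly two. This cadence is presumably set by the bdev_nvme reconnect options (reconnect_delay_sec / ctrlr_loss_timeout_sec in bdev_nvme_set_options); that is an assumption about this build's configuration, not something the log states. A quick way to pull the cadence out of a chunk like this one, assuming the same entry format as above:

import re
import sys
from datetime import datetime

# Matches the "resetting controller" notices emitted by nvme_ctrlr_disconnect,
# capturing the timestamp and the controller NQN.
RESET = re.compile(
    r"\[(\d{4}-\d{2}-\d{2} [\d:.]+)\] nvme_ctrlr\.c:\s*\d+:nvme_ctrlr_disconnect: "
    r"\*NOTICE\*: \[([^\]]+)\] resetting controller"
)

def reset_cadence(text):
    """Print the spacing between successive reset attempts, per controller."""
    last = {}
    for ts_text, ctrlr in RESET.findall(text):
        ts = datetime.strptime(ts_text, "%Y-%m-%d %H:%M:%S.%f")
        if ctrlr in last:
            print(f"{ctrlr}: +{(ts - last[ctrlr]).total_seconds():.3f} s")
        last[ctrlr] = ts

# Hypothetical usage: python reset_cadence.py < console.log
reset_cadence(sys.stdin.read())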
00:23:01.962 [2024-06-10 11:30:09.982556] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
[2024-06-10 11:30:11.392432] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
[2024-06-10 11:30:11.392457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-06-10 11:30:11.392464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32750 cdw0:16 sqhd:a3b9 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST (0c) print-and-abort repeats for cid:2, cid:3 and cid:4 on the admin queue (qid:0) ...]
[2024-06-10 11:30:11.402078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
[2024-06-10 11:30:11.402094] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
[2024-06-10 11:30:11.402119] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
[2024-06-10 11:30:11.402443] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... the NOTICE repeats 59 times in total, roughly every 10 ms, from 11:30:11.402443 through 11:30:11.983928 ...]
[2024-06-10 11:30:11.988055] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
[2024-06-10 11:30:11.988062] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
[2024-06-10 11:30:11.988074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
[2024-06-10 11:30:11.988080] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
[2024-06-10 11:30:11.988088] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
[2024-06-10 11:30:11.988093] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
[2024-06-10 11:30:11.988098] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
[2024-06-10 11:30:11.988113] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-06-10 11:30:11.988119] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
[2024-06-10 11:30:11.993952] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... the NOTICE repeats 33 times in total, roughly every 10 ms, from 11:30:11.993952 through 11:30:12.314745, and the run continues past the end of this excerpt ...]
00:23:01.964 [2024-06-10 11:30:12.404476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:52440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:01.964 [2024-06-10 11:30:12.404484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32750 cdw0:ea3bc280 sqhd:f540 p:0 m:0 dnr:0
[... the same WRITE command / ABORTED - SQ DELETION completion pair (cid varies) repeated, 100 more pairs, for every lba from 52448 through 53240 in len:8 steps ...]
00:23:01.967 [2024-06-10 11:30:12.405617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:52224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079fe000 len:0x1000 key:0x1bf400
00:23:01.967 [2024-06-10 11:30:12.405625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32750 cdw0:ea3bc280 sqhd:f540 p:0 m:0 dnr:0
[... the same READ command / ABORTED - SQ DELETION completion pair (cid varies) repeated, 25 more pairs, for every lba from 52232 through 52424 in len:8 steps, with SGL KEYED DATA BLOCK addresses stepping down from 0x2000079fc000 to 0x2000079cc000, key:0x1bf400 ...]
00:23:01.967 [2024-06-10 11:30:12.417952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:01.967 [2024-06-10 11:30:12.417961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:01.967 [2024-06-10 11:30:12.417966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52432 len:8 PRP1 0x0 PRP2 0x0
00:23:01.967 [2024-06-10 11:30:12.417971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:01.967 [2024-06-10 11:30:12.418006] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:23:01.968 [2024-06-10 11:30:12.418297] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:23:01.968 [2024-06-10 11:30:12.418305] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:23:01.968 [2024-06-10 11:30:12.418309] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280
00:23:01.968 [2024-06-10 11:30:12.418319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:23:01.968 [2024-06-10 11:30:12.418324] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:23:01.968 [2024-06-10 11:30:12.418331] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state
00:23:01.968 [2024-06-10 11:30:12.418336] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed
00:23:01.968 [2024-06-10 11:30:12.418340] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state
00:23:01.968 [2024-06-10 11:30:12.418353] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:01.968 [2024-06-10 11:30:12.418358] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:23:01.968 [2024-06-10 11:30:13.038673] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:01.968 [2024-06-10 11:30:13.420750] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:23:01.968 [2024-06-10 11:30:13.420765] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:23:01.968 [2024-06-10 11:30:13.420770] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280
00:23:01.968 [2024-06-10 11:30:13.420780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:23:01.968 [2024-06-10 11:30:13.420785] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:23:01.968 [2024-06-10 11:30:13.420796] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state
00:23:01.968 [2024-06-10 11:30:13.420800] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed
00:23:01.968 [2024-06-10 11:30:13.420805] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state
00:23:01.968 [2024-06-10 11:30:13.420816] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:01.968 [2024-06-10 11:30:13.420821] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:23:01.968 [2024-06-10 11:30:14.423310] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:23:01.968 [2024-06-10 11:30:14.423333] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:23:01.968 [2024-06-10 11:30:14.423339] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280
00:23:01.968 [2024-06-10 11:30:14.423350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:23:01.968 [2024-06-10 11:30:14.423356] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:23:01.968 [2024-06-10 11:30:14.423375] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:23:01.968 [2024-06-10 11:30:14.423380] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:23:01.968 [2024-06-10 11:30:14.423385] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:23:01.968 [2024-06-10 11:30:14.423400] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:01.968 [2024-06-10 11:30:14.423406] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:23:01.968 [2024-06-10 11:30:15.426044] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:23:01.968 [2024-06-10 11:30:15.426074] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:23:01.968 [2024-06-10 11:30:15.426080] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:23:01.968 [2024-06-10 11:30:15.426093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:01.968 [2024-06-10 11:30:15.426099] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:23:01.968 [2024-06-10 11:30:15.426131] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:23:01.968 [2024-06-10 11:30:15.426138] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:23:01.968 [2024-06-10 11:30:15.426143] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:23:01.968 [2024-06-10 11:30:15.426162] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:01.968 [2024-06-10 11:30:15.426168] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:23:01.968 [2024-06-10 11:30:17.432977] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:23:01.968 [2024-06-10 11:30:17.433002] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:23:01.968 [2024-06-10 11:30:17.433020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:01.968 [2024-06-10 11:30:17.433025] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:23:01.968 [2024-06-10 11:30:17.433041] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:23:01.968 [2024-06-10 11:30:17.433046] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:23:01.968 [2024-06-10 11:30:17.433051] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:23:01.968 [2024-06-10 11:30:17.433068] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:01.968 [2024-06-10 11:30:17.433074] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:23:01.968 [2024-06-10 11:30:19.438977] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:23:01.968 [2024-06-10 11:30:19.438999] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:23:01.968 [2024-06-10 11:30:19.439016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:01.968 [2024-06-10 11:30:19.439022] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:23:01.968 [2024-06-10 11:30:19.439032] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:23:01.968 [2024-06-10 11:30:19.439036] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:23:01.968 [2024-06-10 11:30:19.439042] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:23:01.968 [2024-06-10 11:30:19.439062] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:01.968 [2024-06-10 11:30:19.439068] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:23:01.968 [2024-06-10 11:30:21.444036] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:23:01.968 [2024-06-10 11:30:21.444067] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:23:01.968 [2024-06-10 11:30:21.444088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:01.968 [2024-06-10 11:30:21.444094] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:23:01.968 [2024-06-10 11:30:21.444414] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:23:01.968 [2024-06-10 11:30:21.444421] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:23:01.968 [2024-06-10 11:30:21.444426] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:23:01.968 [2024-06-10 11:30:21.444451] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:01.968 [2024-06-10 11:30:21.444458] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:23:01.968 [2024-06-10 11:30:23.450359] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:23:01.968 [2024-06-10 11:30:23.450382] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:23:01.968 [2024-06-10 11:30:23.450402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:01.968 [2024-06-10 11:30:23.450409] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
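The block of repeating RDMA_CM_EVENT_ADDR_ERROR / "controller reinitialization failed" entries above is the expected host-side behaviour while the mlx5 port is hot-removed: the bdevperf NVMe driver keeps retrying the reconnect roughly once per second until the device is rescanned, at which point the log switches to "Resetting controller successful.". That retry policy matches the attach options used later in this log (-l -1 -o 1). A minimal sketch of an equivalent attach call, assuming the rpc.py helper from the SPDK tree and the bdevperf RPC socket used in this run:

    # Attach an NVMe-oF/RDMA controller with indefinite reconnect retries.
    # -l -1 : ctrlr-loss-timeout-sec = -1, never declare the controller lost
    # -o 1  : reconnect-delay-sec = 1, retry the connection every second
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1

With these options the repeated "Ctrlr is in error state" notices are retries, not failures; only a missing final "Resetting controller successful." would indicate a real problem.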
00:23:01.968 [2024-06-10 11:30:23.450418] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:23:01.968 [2024-06-10 11:30:23.450423] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:23:01.968 [2024-06-10 11:30:23.450428] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:23:01.968 [2024-06-10 11:30:23.450754] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:01.968 [2024-06-10 11:30:23.450766] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:23:01.968 [2024-06-10 11:30:24.505694] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:01.968 00:23:01.968 Latency(us) 00:23:01.968 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.968 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:01.968 Verification LBA range: start 0x0 length 0x8000 00:23:01.968 Nvme_mlx_0_0n1 : 90.00 13282.88 51.89 0.00 0.00 9617.03 942.08 14036937.39 00:23:01.968 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:01.968 Verification LBA range: start 0x0 length 0x8000 00:23:01.968 Nvme_mlx_0_1n1 : 90.01 8890.89 34.73 0.00 0.00 14381.79 2457.60 14092861.44 00:23:01.968 =================================================================================================================== 00:23:01.968 Total : 22173.77 86.62 0.00 0.00 11527.58 942.08 14092861.44 00:23:01.968 Received shutdown signal, test time was about 90.000000 seconds 00:23:01.968 00:23:01.968 Latency(us) 00:23:01.968 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.968 =================================================================================================================== 00:23:01.968 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@202 -- # killprocess 3649105 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@949 -- # '[' -z 3649105 ']' 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@953 -- # kill -0 3649105 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@954 -- # uname 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3649105 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:01.969 11:31:25 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3649105' 00:23:01.969 killing process with pid 3649105 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@968 -- # kill 3649105 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@973 -- # wait 3649105 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@203 -- # nvmfpid= 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@205 -- # return 0 00:23:01.969 00:23:01.969 real 1m33.009s 00:23:01.969 user 4m20.598s 00:23:01.969 sys 0m5.650s 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:23:01.969 ************************************ 00:23:01.969 END TEST nvmf_device_removal_pci_remove_no_srq 00:23:01.969 ************************************ 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@312 -- # run_test nvmf_device_removal_pci_remove test_remove_and_rescan 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:23:01.969 ************************************ 00:23:01.969 START TEST nvmf_device_removal_pci_remove 00:23:01.969 ************************************ 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@1124 -- # test_remove_and_rescan 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@481 -- # nvmfpid=3668160 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@482 -- # waitforlisten 3668160 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@830 -- # '[' -z 3668160 ']' 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.969 11:31:25 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:01.969 11:31:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:23:01.969 [2024-06-10 11:31:25.737975] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:23:01.969 [2024-06-10 11:31:25.738024] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.969 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.969 [2024-06-10 11:31:25.800882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:01.969 [2024-06-10 11:31:25.870915] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.969 [2024-06-10 11:31:25.870954] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.969 [2024-06-10 11:31:25.870962] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.969 [2024-06-10 11:31:25.870969] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.969 [2024-06-10 11:31:25.870974] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:01.969 [2024-06-10 11:31:25.871114] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.969 [2024-06-10 11:31:25.871116] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@863 -- # return 0 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@130 -- # create_subsystem_and_connect 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:23:01.969 [2024-06-10 11:31:26.596292] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1302a20/0x1306f10) succeed. 00:23:01.969 [2024-06-10 11:31:26.609499] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1303f20/0x13485a0) succeed. 
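From here the script stands up one NVMe-oF subsystem per RDMA interface (mlx_0_0 on 192.168.100.8, then mlx_0_1 on 192.168.100.9). Condensed from the rpc_cmd calls traced below into the equivalent direct rpc.py invocations (a sketch, assuming the target's default /var/tmp/spdk.sock RPC socket), the per-interface sequence is:

    # RDMA transport (created once), then a malloc-backed subsystem per port.
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 128 512 -b mlx_0_0            # 128 MB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 \
        -t rdma -a 192.168.100.8 -s 4420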
00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # get_rdma_if_list 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@105 -- # continue 2 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@105 -- # continue 2 00:23:01.969 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@25 -- # local -a dev_name 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:23:01.970 11:31:26 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 00:23:01.970 11:31:26 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:23:01.970 [2024-06-10 11:31:26.793804] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@41 -- # return 0 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@25 -- # local -a dev_name 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.970 11:31:26 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:23:01.970 [2024-06-10 11:31:26.878306] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@41 -- # return 0 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@53 -- # return 0 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@87 -- # local dev_names 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@91 -- # bdevperf_pid=3668372 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat 
$testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:01.970 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@94 -- # waitforlisten 3668372 /var/tmp/bdevperf.sock 00:23:01.971 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:01.971 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@830 -- # '[' -z 3668372 ']' 00:23:01.971 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.971 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:01.971 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:01.971 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:01.971 11:31:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@863 -- # return 0 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 
00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:23:01.971 Nvme_mlx_0_0n1 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:23:01.971 Nvme_mlx_0_1n1 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=3668554 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@112 -- # sleep 5 00:23:01.971 11:31:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@109 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:98:00.0/net/mlx_0_0/device 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:97/0000:97:02.0/0000:98:00.0/infiniband 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:98:00.0/net/mlx_0_0/device 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:97/0000:97:02.0/0000:98:00.0 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 
00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:23:04.515 11:31:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_0 00:23:04.515 11:31:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:04.515 mlx5_0 00:23:04.515 11:31:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 0 00:23:04.515 11:31:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:23:04.515 11:31:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 00:23:04.515 11:31:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # echo 1 00:23:04.515 11:31:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 00:23:04.515 11:31:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:23:04.515 11:31:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:98:00.0/net/mlx_0_0/device 00:23:04.515 [2024-06-10 11:31:33.092145] rdma.c:3574:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 00:23:04.515 [2024-06-10 11:31:33.092217] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:23:04.515 [2024-06-10 11:31:33.095849] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:23:04.515 [2024-06-10 11:31:33.095874] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 42 00:23:12.656 11:31:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # seq 1 10 00:23:12.656 11:31:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:23:12.656 11:31:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:23:12.656 11:31:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:23:12.656 11:31:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:23:12.656 11:31:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:23:12.656 11:31:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_0 00:23:12.656 11:31:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.656 11:31:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:23:12.656 11:31:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:12.657 
11:31:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 1 00:23:12.657 11:31:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@149 -- # break 00:23:12.657 11:31:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:23:12.657 11:31:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:23:12.657 11:31:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:23:12.657 11:31:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:23:12.657 11:31:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.657 11:31:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:23:12.657 11:31:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:12.657 11:31:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:23:12.657 11:31:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@160 -- # rescan_pci 00:23:12.657 11:31:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@57 -- # echo 1 00:23:12.657 [2024-06-10 11:31:41.286682] rdma.c:3263:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x13036f0, err 11. Skip rescan. 00:23:12.657 11:31:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # seq 1 10 00:23:12.657 11:31:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:23:12.657 11:31:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:97/0000:97:02.0/0000:98:00.0/net 00:23:12.657 11:31:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:23:12.657 11:31:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:23:12.657 11:31:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:23:12.657 11:31:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@171 -- # break 00:23:12.657 11:31:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:23:12.657 11:31:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:23:12.917 [2024-06-10 11:31:41.673388] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1305940/0x1306f10) succeed. 00:23:12.917 [2024-06-10 11:31:41.673439] rdma.c:3316:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 
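The removal and recovery sequence traced around this point is ordinary Linux PCI hot-plug plus interface reconfiguration. The targets of the two "echo 1" writes are elided in the xtrace, so the sysfs paths below are the standard interfaces the script is assumed to drive, with the BDF (0000:98:00.0) taken from the readlink output above:

    # Surprise-remove the PCI function behind mlx_0_0, then bring it back.
    echo 1 > /sys/bus/pci/devices/0000:98:00.0/remove   # target logs: port ... on device mlx5_0 is being removed
    echo 1 > /sys/bus/pci/rescan                         # re-enumerate; a new mlx5_0 IB device is created
    ip link set mlx_0_0 up                               # the netdev returns without its previous config
    ip addr add 192.168.100.8/24 dev mlx_0_0             # restore the address so 192.168.100.8:4420 can listen again

The "Found new IB device but port ... is still failed(-1) to listen" error directly above appears to be expected at this stage: the IB device exists again, but the listener cannot bind until the IP address is re-added, which the trace does next.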
00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # seq 1 10 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:23:16.219 [2024-06-10 11:31:44.821327] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:16.219 [2024-06-10 11:31:44.821355] rdma.c:3322:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:23:16.219 [2024-06-10 11:31:44.821367] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:23:16.219 [2024-06-10 11:31:44.821376] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # ib_count=2 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@189 -- # break 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- 
target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:98:00.1/net/mlx_0_1/device 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:97/0000:97:02.0/0000:98:00.1/infiniband 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:98:00.1/net/mlx_0_1/device 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:97/0000:97:02.0/0000:98:00.1 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_1 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- 
# [[ 0 == 0 ]] 00:23:16.219 mlx5_1 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 0 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # echo 1 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:23:16.219 11:31:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:98:00.1/net/mlx_0_1/device 00:23:16.219 [2024-06-10 11:31:45.006901] rdma.c:3574:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 00:23:16.219 [2024-06-10 11:31:45.006970] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:23:16.219 [2024-06-10 11:31:45.014053] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:23:16.219 [2024-06-10 11:31:45.014124] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 129 00:23:24.359 11:31:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # seq 1 10 00:23:24.359 11:31:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:23:24.359 11:31:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:23:24.359 11:31:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:23:24.359 11:31:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:23:24.359 11:31:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:23:24.359 11:31:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_1 00:23:24.359 11:31:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:24.359 11:31:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:23:24.359 11:31:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:24.359 11:31:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 1 00:23:24.359 11:31:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@149 -- # break 00:23:24.359 11:31:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:23:24.359 11:31:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local 
rdma_dev_name= 00:23:24.359 11:31:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:23:24.359 11:31:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:23:24.359 11:31:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:24.359 11:31:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:23:24.359 11:31:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:24.359 11:31:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:23:24.359 11:31:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@160 -- # rescan_pci 00:23:24.359 11:31:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@57 -- # echo 1 00:23:24.359 [2024-06-10 11:31:53.256208] rdma.c:3263:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x13de990, err 11. Skip rescan. 00:23:24.620 11:31:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # seq 1 10 00:23:24.620 11:31:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:23:24.620 11:31:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:97/0000:97:02.0/0000:98:00.1/net 00:23:24.620 11:31:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1 00:23:24.620 11:31:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]] 00:23:24.620 11:31:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]] 00:23:24.620 11:31:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@171 -- # break 00:23:24.620 11:31:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]] 00:23:24.620 11:31:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up 00:23:24.881 [2024-06-10 11:31:53.645146] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1305d30/0x13485a0) succeed. 00:23:24.881 [2024-06-10 11:31:53.645207] rdma.c:3316:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen. 
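The xtrace above exercises two small helpers from target/device_removal.sh: check_rdma_dev_exists_in_nvmf_tgt asks the running nvmf target, via the nvmf_get_stats RPC, whether a named RDMA device (mlx5_1 here) is still attached to a poll group, and get_rdma_dev_count_in_nvmf_tgt simply counts the attached devices. A rough bash reconstruction from the jq filters shown in the trace follows; the rpc.py path is an assumption, since only the rpc_cmd wrapper appears in this excerpt.

# Assumed RPC client location; the trace only shows the rpc_cmd wrapper.
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

check_rdma_dev_exists_in_nvmf_tgt() {
    local rdma_dev_name=$1
    # device_removal.sh@76-78: grep the device list reported by nvmf_get_stats.
    "$rpc_py" nvmf_get_stats \
        | jq -r '.poll_groups[0].transports[].devices[].name' \
        | grep "$rdma_dev_name"
}

get_rdma_dev_count_in_nvmf_tgt() {
    # device_removal.sh@82-83: count the attached devices instead of naming them.
    "$rpc_py" nvmf_get_stats | jq -r '.poll_groups[0].transports[].devices | length'
}

In the run above the grep stops matching mlx5_1 at 11:31:52 (return 1, then break out of the seq 1 10 loop), the device count drops to 1, and the subsequent PCI rescan and ip link set mlx_0_1 up bring the port back at 11:31:53.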
00:23:28.223 11:31:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1 00:23:28.223 11:31:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:28.223 11:31:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:28.223 11:31:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:28.223 11:31:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:28.223 11:31:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:23:28.223 11:31:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:23:28.223 11:31:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # seq 1 10 00:23:28.223 11:31:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:23:28.223 11:31:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:23:28.223 11:31:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:23:28.223 11:31:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:23:28.223 11:31:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:23:28.223 11:31:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:28.223 11:31:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:23:28.223 [2024-06-10 11:31:56.901816] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:23:28.223 [2024-06-10 11:31:56.901852] rdma.c:3322:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:23:28.223 [2024-06-10 11:31:56.901864] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:23:28.223 [2024-06-10 11:31:56.901876] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:23:28.223 11:31:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:28.223 11:31:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # ib_count=2 00:23:28.223 11:31:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:23:28.223 11:31:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@189 -- # break 00:23:28.223 11:31:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@200 -- # stop_bdevperf 00:23:28.223 11:31:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@116 -- # wait 3668554 00:24:35.960 0 00:24:35.960 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@118 -- # 
killprocess 3668372 00:24:35.960 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@949 -- # '[' -z 3668372 ']' 00:24:35.960 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@953 -- # kill -0 3668372 00:24:35.960 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@954 -- # uname 00:24:35.960 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:35.960 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3668372 00:24:35.960 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:24:35.960 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:24:35.960 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3668372' 00:24:35.960 killing process with pid 3668372 00:24:35.960 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@968 -- # kill 3668372 00:24:35.960 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@973 -- # wait 3668372 00:24:35.960 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@119 -- # bdevperf_pid= 00:24:35.960 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:24:35.960 [2024-06-10 11:31:26.933784] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:24:35.960 [2024-06-10 11:31:26.933835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3668372 ] 00:24:35.960 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.960 [2024-06-10 11:31:26.983271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.960 [2024-06-10 11:31:27.035647] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:35.960 Running I/O for 90 seconds... 
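After the device reappears, the trace above restores the interface configuration that was lost with the hot remove and then waits for the nvmf target to pick the port up again: get_ip_address finds no address on mlx_0_1, so 192.168.100.9/24 is re-added, and the RDMA device count is polled until it exceeds the value captured right after the remove. A condensed sketch of that restore path follows; the poll delay is an assumption, as it is not visible in this excerpt.

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # same assumed path as above

# device_removal.sh@180-181: restore the address that vanished with the removed PCI function.
ip_addr=$(ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1)
if [ -z "$ip_addr" ]; then
    ip addr add 192.168.100.9/24 dev mlx_0_1
fi

# device_removal.sh@186-189: poll until the target sees more RDMA devices than right after the remove.
ib_count_after_remove=1
for i in $(seq 1 10); do
    ib_count=$("$rpc_py" nvmf_get_stats | jq -r '.poll_groups[0].transports[].devices | length')
    (( ib_count > ib_count_after_remove )) && break
    sleep 2   # assumption; the delay between polls is not shown in this excerpt
done

In the log above the first poll already reports two devices, because the target re-creates mlx5_1 and logs that port 192.168.100.9:4420 has come back at 11:31:56, while the rpc_cmd output is still being collected.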
00:24:35.960 [2024-06-10 11:31:33.086856] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno No such device or address (6) 00:24:35.960 [2024-06-10 11:31:33.086890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.960 [2024-06-10 11:31:33.086898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32673 cdw0:6 sqhd:f3b9 p:0 m:0 dnr:0 00:24:35.960 [2024-06-10 11:31:33.086904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.960 [2024-06-10 11:31:33.086909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32673 cdw0:6 sqhd:f3b9 p:0 m:0 dnr:0 00:24:35.960 [2024-06-10 11:31:33.086915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.960 [2024-06-10 11:31:33.086920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32673 cdw0:6 sqhd:f3b9 p:0 m:0 dnr:0 00:24:35.960 [2024-06-10 11:31:33.086925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.960 [2024-06-10 11:31:33.086930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32673 cdw0:6 sqhd:f3b9 p:0 m:0 dnr:0 00:24:35.960 [2024-06-10 11:31:33.089148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:35.960 [2024-06-10 11:31:33.089157] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:24:35.960 [2024-06-10 11:31:33.089184] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:24:35.960 [2024-06-10 11:31:33.096215] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.960 [2024-06-10 11:31:33.106241] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.960 [2024-06-10 11:31:33.116264] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.960 [2024-06-10 11:31:33.126290] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.960 [2024-06-10 11:31:33.136315] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.960 [2024-06-10 11:31:33.146340] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.960 [2024-06-10 11:31:33.156366] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.166390] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.176415] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.186441] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:35.961 [2024-06-10 11:31:33.196466] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.206491] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.216515] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.226541] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.236566] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.246591] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.256618] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.266644] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.276668] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.286692] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.296717] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.306743] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.316769] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.326794] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.336822] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.346846] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.356871] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.366895] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.376920] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.387018] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.397044] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.407071] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.417146] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:35.961 [2024-06-10 11:31:33.427282] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.437306] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.447889] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.457997] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.468023] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.478293] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.488475] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.498501] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.508595] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.518717] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.528738] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.539382] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.549693] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.559914] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.569937] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.580357] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.590539] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.600564] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.610600] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.620626] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.630652] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.640677] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.650703] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:35.961 [2024-06-10 11:31:33.660728] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.670752] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.680779] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.690805] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.700831] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.710857] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.720884] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.730909] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.740934] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.750958] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.760984] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.771009] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.781034] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.791061] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.801086] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.811110] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.821134] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.831158] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.841184] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.851209] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.861805] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.871831] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.882006] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:35.961 [2024-06-10 11:31:33.892304] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.902328] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.912992] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.923016] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.933635] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.943659] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.954005] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.964031] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.974368] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.984702] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:33.994727] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:34.004753] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:34.014780] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.961 [2024-06-10 11:31:34.024823] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.962 [2024-06-10 11:31:34.034848] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.962 [2024-06-10 11:31:34.044872] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.962 [2024-06-10 11:31:34.055236] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.962 [2024-06-10 11:31:34.065261] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.962 [2024-06-10 11:31:34.075933] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.962 [2024-06-10 11:31:34.085957] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
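The long run of "Unable to perform failover, already in progress" notices above is bdevperf's bdev_nvme layer declining to start a new failover while the reset triggered by the mlx5_0 removal is still in flight; the READs it had queued on that path are what get aborted with SQ DELETION in the dump that follows. If one wanted to watch the controller state from outside during this window, a minimal, hypothetical observation loop could look like the sketch below; bdev_nvme_get_controllers is a standard SPDK RPC, but the bdevperf RPC socket path is an assumption, as it does not appear in this excerpt.

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed path, as above
bdevperf_sock=/var/tmp/bdevperf.sock                                  # assumption; not shown in this excerpt

# Poll the controller list once a second for half a minute; the JSON layout
# returned by bdev_nvme_get_controllers varies with the SPDK version.
for _ in $(seq 1 30); do
    "$rpc_py" -s "$bdevperf_sock" bdev_nvme_get_controllers | jq .
    sleep 1
done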
00:24:35.962 [2024-06-10 11:31:34.091543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007776000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007778000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777a000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777c000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777e000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007780000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007782000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007784000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007786000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 
11:31:34.091706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007788000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778a000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778c000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778e000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:64064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007790000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007792000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007794000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007796000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007798000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091861] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779a000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779c000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779e000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a0000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a2000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:64144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a4000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a6000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a8000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.091985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.091996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077aa000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.092001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.092012] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ac000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.092017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.092028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ae000 len:0x1000 key:0x1810ef 00:24:35.962 [2024-06-10 11:31:34.092033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.962 [2024-06-10 11:31:34.092045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b0000 len:0x1000 key:0x1810ef 00:24:35.963 [2024-06-10 11:31:34.092049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.963 [2024-06-10 11:31:34.092061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b2000 len:0x1000 key:0x1810ef 00:24:35.963 [2024-06-10 11:31:34.092065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.963 [2024-06-10 11:31:34.092077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b4000 len:0x1000 key:0x1810ef 00:24:35.963 [2024-06-10 11:31:34.092082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.963 [2024-06-10 11:31:34.092093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b6000 len:0x1000 key:0x1810ef 00:24:35.963 [2024-06-10 11:31:34.092098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.963 [2024-06-10 11:31:34.092109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b8000 len:0x1000 key:0x1810ef 00:24:35.963 [2024-06-10 11:31:34.092115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.963 [2024-06-10 11:31:34.092126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:64232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ba000 len:0x1000 key:0x1810ef 00:24:35.963 [2024-06-10 11:31:34.092132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.963 [2024-06-10 11:31:34.092144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077bc000 len:0x1000 key:0x1810ef 00:24:35.963 [2024-06-10 11:31:34.092149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.963 [2024-06-10 11:31:34.092160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 
nsid:1 lba:64248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077be000 len:0x1000 key:0x1810ef 00:24:35.963 [2024-06-10 11:31:34.092165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.963 [2024-06-10 11:31:34.092177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c0000 len:0x1000 key:0x1810ef 00:24:35.963 [2024-06-10 11:31:34.092182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.963 [2024-06-10 11:31:34.092193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c2000 len:0x1000 key:0x1810ef 00:24:35.963 [2024-06-10 11:31:34.092198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.963 [2024-06-10 11:31:34.092209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c4000 len:0x1000 key:0x1810ef 00:24:35.963 [2024-06-10 11:31:34.092214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.963 [2024-06-10 11:31:34.092225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c6000 len:0x1000 key:0x1810ef 00:24:35.963 [2024-06-10 11:31:34.092231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.963 [2024-06-10 11:31:34.092242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c8000 len:0x1000 key:0x1810ef 00:24:35.963 [2024-06-10 11:31:34.092247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.963 [2024-06-10 11:31:34.104290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:35.963 [2024-06-10 11:31:34.104298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:35.963 [2024-06-10 11:31:34.104303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64296 len:8 PRP1 0x0 PRP2 0x0 00:24:35.963 [2024-06-10 11:31:34.104308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.963 [2024-06-10 11:31:34.107119] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:24:35.963 [2024-06-10 11:31:34.107363] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:24:35.963 [2024-06-10 11:31:34.107378] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:24:35.963 [2024-06-10 11:31:34.107383] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:24:35.963 [2024-06-10 11:31:34.107394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ 
transport error -6 (No such device or address) on qpair id 0 00:24:35.963 [2024-06-10 11:31:34.107400] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:24:35.963 [2024-06-10 11:31:34.107424] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:24:35.963 [2024-06-10 11:31:34.107428] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:24:35.963 [2024-06-10 11:31:34.107434] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:24:35.963 [2024-06-10 11:31:34.107448] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.963 [2024-06-10 11:31:34.107453] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:24:35.963 [2024-06-10 11:31:35.109973] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:24:35.963 [2024-06-10 11:31:35.109989] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:24:35.963 [2024-06-10 11:31:35.109994] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:24:35.963 [2024-06-10 11:31:35.110004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:35.963 [2024-06-10 11:31:35.110009] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:24:35.963 [2024-06-10 11:31:35.110017] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:24:35.963 [2024-06-10 11:31:35.110021] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:24:35.963 [2024-06-10 11:31:35.110026] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:24:35.963 [2024-06-10 11:31:35.110041] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.963 [2024-06-10 11:31:35.110046] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:24:35.963 [2024-06-10 11:31:36.112548] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:24:35.963 [2024-06-10 11:31:36.112566] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:24:35.963 [2024-06-10 11:31:36.112571] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:24:35.963 [2024-06-10 11:31:36.112582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:35.963 [2024-06-10 11:31:36.112587] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 
00:24:35.963 [2024-06-10 11:31:36.112595] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:24:35.963 [2024-06-10 11:31:36.112600] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:24:35.963 [2024-06-10 11:31:36.112605] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:24:35.963 [2024-06-10 11:31:36.112619] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.963 [2024-06-10 11:31:36.112625] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:24:35.963 [2024-06-10 11:31:37.115367] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:24:35.963 [2024-06-10 11:31:37.115386] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:24:35.963 [2024-06-10 11:31:37.115391] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:24:35.963 [2024-06-10 11:31:37.115405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:35.963 [2024-06-10 11:31:37.115410] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:24:35.963 [2024-06-10 11:31:37.115418] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:24:35.963 [2024-06-10 11:31:37.115422] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:24:35.963 [2024-06-10 11:31:37.115427] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:24:35.963 [2024-06-10 11:31:37.115442] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.963 [2024-06-10 11:31:37.115447] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:24:35.963 [2024-06-10 11:31:39.120850] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:24:35.963 [2024-06-10 11:31:39.120871] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:24:35.963 [2024-06-10 11:31:39.120886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:35.963 [2024-06-10 11:31:39.120892] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:24:35.963 [2024-06-10 11:31:39.120901] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:24:35.963 [2024-06-10 11:31:39.120906] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:24:35.963 [2024-06-10 11:31:39.120912] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:24:35.963 [2024-06-10 11:31:39.120929] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.963 [2024-06-10 11:31:39.120935] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:24:35.964 [2024-06-10 11:31:41.125855] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:24:35.964 [2024-06-10 11:31:41.125878] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:24:35.964 [2024-06-10 11:31:41.125894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:35.964 [2024-06-10 11:31:41.125900] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:24:35.964 [2024-06-10 11:31:41.125909] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:24:35.964 [2024-06-10 11:31:41.125914] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:24:35.964 [2024-06-10 11:31:41.125920] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:24:35.964 [2024-06-10 11:31:41.125936] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.964 [2024-06-10 11:31:41.125941] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:24:35.964 [2024-06-10 11:31:43.130852] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:24:35.964 [2024-06-10 11:31:43.130871] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:24:35.964 [2024-06-10 11:31:43.130885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:35.964 [2024-06-10 11:31:43.130891] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:24:35.964 [2024-06-10 11:31:43.130899] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:24:35.964 [2024-06-10 11:31:43.130908] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:24:35.964 [2024-06-10 11:31:43.130914] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:24:35.964 [2024-06-10 11:31:43.130929] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
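From 11:31:34 onward the dump above repeats the same cycle: nvme_ctrlr_disconnect, RDMA address resolution against the removed port fails with RDMA_CM_EVENT_ADDR_ERROR, controller reinitialization fails, and the reset is declared failed before the next attempt is scheduled, first about every second and then every two seconds. Because everything after the cat at target/device_removal.sh@121 is the contents of try.txt, the attempt count can be pulled back out of that file; a small sketch using the path shown in the trace:

try_txt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt

# Each failed cycle ends with this exact error line from bdev_nvme.c:2062.
grep -c 'Resetting controller failed' "$try_txt"

# Timestamps of the individual reconnect attempts, to see the 1 s / 2 s cadence.
grep -o '\[2024-06-10 [0-9:.]*\] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect' "$try_txt"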
00:24:35.964 [2024-06-10 11:31:43.130935] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:24:35.964 [2024-06-10 11:31:45.000866] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:24:35.964 [2024-06-10 11:31:45.000887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.964 [2024-06-10 11:31:45.000894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32673 cdw0:16 sqhd:f3b9 p:0 m:0 dnr:0 00:24:35.964 [2024-06-10 11:31:45.000901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.964 [2024-06-10 11:31:45.000906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32673 cdw0:16 sqhd:f3b9 p:0 m:0 dnr:0 00:24:35.964 [2024-06-10 11:31:45.000912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.964 [2024-06-10 11:31:45.000917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32673 cdw0:16 sqhd:f3b9 p:0 m:0 dnr:0 00:24:35.964 [2024-06-10 11:31:45.000922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.964 [2024-06-10 11:31:45.000927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32673 cdw0:16 sqhd:f3b9 p:0 m:0 dnr:0 00:24:35.964 [2024-06-10 11:31:45.019491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:35.964 [2024-06-10 11:31:45.019518] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:24:35.964 [2024-06-10 11:31:45.019547] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:24:35.964 [2024-06-10 11:31:45.019588] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.029587] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.039612] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.049638] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.059662] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.069687] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.079715] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.089741] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.099773] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:35.964 [2024-06-10 11:31:45.109796] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.119821] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.129847] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.135847] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:24:35.964 [2024-06-10 11:31:45.135857] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:24:35.964 [2024-06-10 11:31:45.135870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:35.964 [2024-06-10 11:31:45.135876] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:24:35.964 [2024-06-10 11:31:45.135884] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:24:35.964 [2024-06-10 11:31:45.135889] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:24:35.964 [2024-06-10 11:31:45.135894] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:24:35.964 [2024-06-10 11:31:45.135908] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.964 [2024-06-10 11:31:45.135914] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:24:35.964 [2024-06-10 11:31:45.139869] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.149893] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.159918] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.169943] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.179969] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.189994] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.200019] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.210044] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.220069] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.230094] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.240120] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:35.964 [2024-06-10 11:31:45.250144] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.260169] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.270193] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.280218] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.290241] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.300267] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.310293] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.320319] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.330345] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.340369] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.350394] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.360418] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.370443] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.380467] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.390492] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.400518] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.410543] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.420566] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.430592] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.440617] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.964 [2024-06-10 11:31:45.450642] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.460668] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.470692] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:35.965 [2024-06-10 11:31:45.480716] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.490740] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.500768] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.510793] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.520817] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.530840] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.540867] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.550892] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.560916] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.570941] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.580965] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.590988] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.601013] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.611038] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.621062] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.631087] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.641111] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.651135] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.661159] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.671185] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.681209] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.691233] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.701258] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:35.965 [2024-06-10 11:31:45.711284] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.721307] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.731332] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.741357] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.751383] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.761408] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.771432] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.781457] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.791484] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.801507] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.811531] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.821555] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.831581] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.841605] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.851629] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.861655] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.871680] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.881704] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.891728] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.901753] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.911777] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.921803] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.931827] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:35.965 [2024-06-10 11:31:45.941853] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.951877] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.961903] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.971927] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.981951] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:45.991975] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:46.002001] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:46.012024] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.965 [2024-06-10 11:31:46.021905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:61808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795c000 len:0x1000 key:0x1bf0ef 00:24:35.965 [2024-06-10 11:31:46.021914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.965 [2024-06-10 11:31:46.021924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795e000 len:0x1000 key:0x1bf0ef 00:24:35.965 [2024-06-10 11:31:46.021930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.965 [2024-06-10 11:31:46.021937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007960000 len:0x1000 key:0x1bf0ef 00:24:35.965 [2024-06-10 11:31:46.021942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.965 [2024-06-10 11:31:46.021948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:61832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007962000 len:0x1000 key:0x1bf0ef 00:24:35.965 [2024-06-10 11:31:46.021953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.965 [2024-06-10 11:31:46.021959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007964000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.021964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.021971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007966000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.021976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 
00:24:35.966 [2024-06-10 11:31:46.021982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007968000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.021987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.021994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:61864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796a000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.021998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:61872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796c000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:61880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796e000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007970000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007972000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007974000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007976000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:61920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007978000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 
11:31:46.022086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797a000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:61936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797c000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:61944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797e000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:61952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007980000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007982000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:61968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007984000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:61976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007986000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007988000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:61992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000798a000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022188] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000798c000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000798e000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007990000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007992000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007994000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007996000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007998000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000799a000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000799c000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022293] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000799e000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a0000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a2000 len:0x1000 key:0x1bf0ef 00:24:35.966 [2024-06-10 11:31:46.022328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.966 [2024-06-10 11:31:46.022335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a4000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a6000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a8000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:62120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079aa000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ac000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ae000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 
nsid:1 lba:62144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b0000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b2000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:62160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b4000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:62168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b6000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:62176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b8000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ba000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079bc000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079be000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c0000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62216 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x2000079c2000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c4000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c6000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:62240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c8000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ca000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:62256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079cc000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ce000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:62272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079d0000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079d2000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079d4000 
len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079d6000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:62304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079d8000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079da000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:62320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079dc000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079de000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:62336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079e0000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079e2000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:62352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079e4000 len:0x1000 key:0x1bf0ef 00:24:35.967 [2024-06-10 11:31:46.022703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079e6000 len:0x1000 key:0x1bf0ef 
00:24:35.967 [2024-06-10 11:31:46.022714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.967 [2024-06-10 11:31:46.022721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079e8000 len:0x1000 key:0x1bf0ef 00:24:35.968 [2024-06-10 11:31:46.022726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.022732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ea000 len:0x1000 key:0x1bf0ef 00:24:35.968 [2024-06-10 11:31:46.022737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.022744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ec000 len:0x1000 key:0x1bf0ef 00:24:35.968 [2024-06-10 11:31:46.022749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.022755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ee000 len:0x1000 key:0x1bf0ef 00:24:35.968 [2024-06-10 11:31:46.022760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.022771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079f0000 len:0x1000 key:0x1bf0ef 00:24:35.968 [2024-06-10 11:31:46.022776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.022782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079f2000 len:0x1000 key:0x1bf0ef 00:24:35.968 [2024-06-10 11:31:46.022787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.022794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079f4000 len:0x1000 key:0x1bf0ef 00:24:35.968 [2024-06-10 11:31:46.022799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.022806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079f6000 len:0x1000 key:0x1bf0ef 00:24:35.968 [2024-06-10 11:31:46.022811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.022817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:62432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079f8000 len:0x1000 key:0x1bf0ef 00:24:35.968 [2024-06-10 
11:31:46.022822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.022829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079fa000 len:0x1000 key:0x1bf0ef 00:24:35.968 [2024-06-10 11:31:46.022833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.022839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079fc000 len:0x1000 key:0x1bf0ef 00:24:35.968 [2024-06-10 11:31:46.022844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.022850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079fe000 len:0x1000 key:0x1bf0ef 00:24:35.968 [2024-06-10 11:31:46.022855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.022862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.968 [2024-06-10 11:31:46.022867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.022873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.968 [2024-06-10 11:31:46.022878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.022884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.968 [2024-06-10 11:31:46.022889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.022896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.968 [2024-06-10 11:31:46.022900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.022907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.968 [2024-06-10 11:31:46.022911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.022917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.968 [2024-06-10 11:31:46.022925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.022932] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.968 [2024-06-10 11:31:46.022937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.022943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.968 [2024-06-10 11:31:46.022948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.022954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.968 [2024-06-10 11:31:46.022959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.022965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.968 [2024-06-10 11:31:46.022970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.022976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.968 [2024-06-10 11:31:46.022981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.022987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.968 [2024-06-10 11:31:46.022992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.022998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.968 [2024-06-10 11:31:46.023003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.023009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.968 [2024-06-10 11:31:46.023013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.023019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.968 [2024-06-10 11:31:46.023024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.023031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.968 [2024-06-10 11:31:46.023036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 
dnr:0 00:24:35.968 [2024-06-10 11:31:46.023042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.968 [2024-06-10 11:31:46.023046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.023053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.968 [2024-06-10 11:31:46.023058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.023065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.968 [2024-06-10 11:31:46.023070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.023076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.968 [2024-06-10 11:31:46.023080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.023087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.968 [2024-06-10 11:31:46.023092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.023098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.968 [2024-06-10 11:31:46.023103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.968 [2024-06-10 11:31:46.023109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.969 [2024-06-10 11:31:46.023113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.969 [2024-06-10 11:31:46.023119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.969 [2024-06-10 11:31:46.023124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.969 [2024-06-10 11:31:46.023130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.969 [2024-06-10 11:31:46.023135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.969 [2024-06-10 11:31:46.023141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.969 [2024-06-10 11:31:46.023146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.969 [2024-06-10 11:31:46.023153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.969 [2024-06-10 11:31:46.023158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.969 [2024-06-10 11:31:46.023164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.969 [2024-06-10 11:31:46.023169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.969 [2024-06-10 11:31:46.023175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.969 [2024-06-10 11:31:46.023180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.969 [2024-06-10 11:31:46.023186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.969 [2024-06-10 11:31:46.023191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.969 [2024-06-10 11:31:46.023202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.969 [2024-06-10 11:31:46.023207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.969 [2024-06-10 11:31:46.023213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.969 [2024-06-10 11:31:46.023218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.969 [2024-06-10 11:31:46.023224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.969 [2024-06-10 11:31:46.023229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.969 [2024-06-10 11:31:46.023235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.969 [2024-06-10 11:31:46.023240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.969 [2024-06-10 11:31:46.023246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.969 [2024-06-10 11:31:46.023250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.969 [2024-06-10 11:31:46.023257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.969 [2024-06-10 11:31:46.023262] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.969 [2024-06-10 11:31:46.023268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.969 [2024-06-10 11:31:46.023273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.969 [2024-06-10 11:31:46.023279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.969 [2024-06-10 11:31:46.023284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.969 [2024-06-10 11:31:46.023290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.969 [2024-06-10 11:31:46.023295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.969 [2024-06-10 11:31:46.023301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.969 [2024-06-10 11:31:46.023307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.969 [2024-06-10 11:31:46.023313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.969 [2024-06-10 11:31:46.023318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.969 [2024-06-10 11:31:46.023324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.969 [2024-06-10 11:31:46.023329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.969 [2024-06-10 11:31:46.023337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.969 [2024-06-10 11:31:46.023345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.969 [2024-06-10 11:31:46.023351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.969 [2024-06-10 11:31:46.023356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.969 [2024-06-10 11:31:46.032224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.969 [2024-06-10 11:31:46.032244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:b0e41b60 sqhd:4540 p:0 m:0 dnr:0 00:24:35.969 [2024-06-10 11:31:46.044291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:35.969 [2024-06-10 11:31:46.044300] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:35.969 [2024-06-10 11:31:46.044305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62824 len:8 PRP1 0x0 PRP2 0x0 00:24:35.969 [2024-06-10 11:31:46.044310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.969 [2024-06-10 11:31:46.044342] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:24:35.969 [2024-06-10 11:31:46.044631] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:24:35.969 [2024-06-10 11:31:46.044640] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:24:35.969 [2024-06-10 11:31:46.044644] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:24:35.969 [2024-06-10 11:31:46.044654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:35.969 [2024-06-10 11:31:46.044659] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:24:35.969 [2024-06-10 11:31:46.044667] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:24:35.969 [2024-06-10 11:31:46.044671] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:24:35.969 [2024-06-10 11:31:46.044676] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:24:35.969 [2024-06-10 11:31:46.044689] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.969 [2024-06-10 11:31:46.044694] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:24:35.969 [2024-06-10 11:31:46.183804] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:35.969 [2024-06-10 11:31:47.047170] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:24:35.969 [2024-06-10 11:31:47.047182] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:24:35.969 [2024-06-10 11:31:47.047187] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:24:35.969 [2024-06-10 11:31:47.047197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:35.969 [2024-06-10 11:31:47.047201] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
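
The run of per-command notices above repeats the same "ABORTED - SQ DELETION" completion for every queued WRITE; rather than reading them one by one, a short post-processing pass over a saved copy of this console output (the build.log file name is an assumption, not something the test produces) tallies them per queue:

# count aborted completions per queue id in a saved copy of this log
grep 'ABORTED - SQ DELETION' build.log | grep -o 'qid:[0-9]*' | sort | uniq -c
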
00:24:35.969 [2024-06-10 11:31:47.047209] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:24:35.970 [2024-06-10 11:31:47.047216] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:24:35.970 [2024-06-10 11:31:47.047221] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:24:35.970 [2024-06-10 11:31:47.047232] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.970 [2024-06-10 11:31:47.047237] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:24:35.970 [2024-06-10 11:31:48.051202] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:24:35.970 [2024-06-10 11:31:48.051231] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:24:35.970 [2024-06-10 11:31:48.051236] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:24:35.970 [2024-06-10 11:31:48.051248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:35.970 [2024-06-10 11:31:48.051253] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:24:35.970 [2024-06-10 11:31:48.051261] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:24:35.970 [2024-06-10 11:31:48.051266] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:24:35.970 [2024-06-10 11:31:48.051272] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:24:35.970 [2024-06-10 11:31:48.051287] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.970 [2024-06-10 11:31:48.051293] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:24:35.970 [2024-06-10 11:31:49.053846] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:24:35.970 [2024-06-10 11:31:49.053877] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:24:35.970 [2024-06-10 11:31:49.053883] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:24:35.970 [2024-06-10 11:31:49.053895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:35.970 [2024-06-10 11:31:49.053901] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
00:24:35.970 [2024-06-10 11:31:49.053910] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:24:35.970 [2024-06-10 11:31:49.053915] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:24:35.970 [2024-06-10 11:31:49.053920] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:24:35.970 [2024-06-10 11:31:49.053937] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.970 [2024-06-10 11:31:49.053942] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:24:35.970 [2024-06-10 11:31:51.060119] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:24:35.970 [2024-06-10 11:31:51.060147] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:24:35.970 [2024-06-10 11:31:51.060166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:35.970 [2024-06-10 11:31:51.060172] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:24:35.970 [2024-06-10 11:31:51.060203] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:24:35.970 [2024-06-10 11:31:51.060208] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:24:35.970 [2024-06-10 11:31:51.060220] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:24:35.970 [2024-06-10 11:31:51.060247] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.970 [2024-06-10 11:31:51.060253] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:24:35.970 [2024-06-10 11:31:53.065429] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:24:35.970 [2024-06-10 11:31:53.065454] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:24:35.970 [2024-06-10 11:31:53.065470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:35.970 [2024-06-10 11:31:53.065475] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:24:35.970 [2024-06-10 11:31:53.065484] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:24:35.970 [2024-06-10 11:31:53.065490] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:24:35.970 [2024-06-10 11:31:53.065495] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:24:35.970 [2024-06-10 11:31:53.065824] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.970 [2024-06-10 11:31:53.065833] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:24:35.970 [2024-06-10 11:31:55.070852] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:24:35.970 [2024-06-10 11:31:55.070882] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:24:35.970 [2024-06-10 11:31:55.070899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:35.970 [2024-06-10 11:31:55.070905] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:24:35.970 [2024-06-10 11:31:55.070932] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:24:35.970 [2024-06-10 11:31:55.070938] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:24:35.970 [2024-06-10 11:31:55.070943] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:24:35.970 [2024-06-10 11:31:55.070967] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.970 [2024-06-10 11:31:55.070973] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:24:35.970 [2024-06-10 11:31:57.078039] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:24:35.970 [2024-06-10 11:31:57.078061] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:24:35.970 [2024-06-10 11:31:57.078078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:35.970 [2024-06-10 11:31:57.078085] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:24:35.970 [2024-06-10 11:31:57.078106] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:24:35.971 [2024-06-10 11:31:57.078111] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:24:35.971 [2024-06-10 11:31:57.078117] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:24:35.971 [2024-06-10 11:31:57.078147] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.971 [2024-06-10 11:31:57.078153] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:24:35.971 [2024-06-10 11:31:58.139104] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
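
The controller reset attempts above recur at roughly one to two second intervals until the 11:31:58 attempt finally succeeds; pulling just the timestamps of the "resetting controller" notices out of the same saved log (again an assumed file name) makes that cadence visible at a glance:

# list the timestamp of each reset attempt for nqn.2016-06.io.spdk:system_mlx_0_1
grep 'resetting controller' build.log | grep -o '\[2024-06-10 [^]]*\]'
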
00:24:35.971
00:24:35.971 Latency(us)
00:24:35.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:35.971 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:35.971 Verification LBA range: start 0x0 length 0x8000
00:24:35.971 Nvme_mlx_0_0n1 : 90.01 13180.93 51.49 0.00 0.00 9691.30 1925.12 14036937.39
00:24:35.971 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:35.971 Verification LBA range: start 0x0 length 0x8000
00:24:35.971 Nvme_mlx_0_1n1 : 90.01 8768.49 34.25 0.00 0.00 14580.00 2334.72 14092861.44
00:24:35.971 ===================================================================================================================
00:24:35.971 Total : 21949.42 85.74 0.00 0.00 11644.26 1925.12 14092861.44
00:24:35.971 Received shutdown signal, test time was about 90.000000 seconds
00:24:35.971
00:24:35.971 Latency(us)
00:24:35.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:35.971 ===================================================================================================================
00:24:35.971 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT
00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt
00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@202 -- # killprocess 3668160
00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@949 -- # '[' -z 3668160 ']'
00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@953 -- # kill -0 3668160
00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@954 -- # uname
00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3668160
00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3668160'
00:24:35.971 killing process with pid 3668160
00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@968 -- # kill 3668160
00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@973 -- # wait 3668160
00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@203 -- # nvmfpid=
00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@205 -- # return 0
00:24:35.971
00:24:35.971 real 1m33.081s
00:24:35.971 user 4m20.208s
00:24:35.971 sys 0m5.904s
00:24:35.971 11:32:58
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:24:35.971 ************************************ 00:24:35.971 END TEST nvmf_device_removal_pci_remove 00:24:35.971 ************************************ 00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@317 -- # nvmftestfini 00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@117 -- # sync 00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@120 -- # set +e 00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:35.971 rmmod nvme_rdma 00:24:35.971 rmmod nvme_fabrics 00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@124 -- # set -e 00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@125 -- # return 0 00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@318 -- # clean_bond_device 00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@240 -- # ip link 00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@240 -- # grep bond_nvmf 00:24:35.971 00:24:35.971 real 3m13.720s 00:24:35.971 user 8m43.078s 00:24:35.971 sys 0m17.024s 00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:35.971 11:32:58 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:24:35.971 ************************************ 00:24:35.971 END TEST nvmf_device_removal 00:24:35.971 ************************************ 00:24:35.971 11:32:58 nvmf_rdma -- nvmf/nvmf.sh@80 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:24:35.971 11:32:58 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:35.971 11:32:58 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:35.971 11:32:58 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:35.971 ************************************ 00:24:35.971 START TEST nvmf_srq_overwhelm 00:24:35.971 ************************************ 00:24:35.971 11:32:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:24:35.971 * Looking for test storage... 
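
The MiB/s figures in the device-removal Latency(us) summary above follow directly from the reported IOPS and the 4096-byte I/O size; a quick arithmetic sanity check against the Total row (not part of the test output):

# 21949.42 IOPS x 4096 B per I/O, expressed in MiB/s -> ~85.74, matching the summary
awk 'BEGIN { printf "%.2f MiB/s\n", 21949.42 * 4096 / 1048576 }'
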
00:24:35.971 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.971 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@47 -- # : 0 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@285 -- # xtrace_disable 00:24:35.972 11:32:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:36.916 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:36.916 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # pci_devs=() 00:24:36.916 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:36.916 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:36.916 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:36.916 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:36.916 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:36.916 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # net_devs=() 00:24:36.916 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:36.916 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # e810=() 00:24:36.916 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # local -ga e810 00:24:36.916 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # x722=() 00:24:36.916 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # local -ga x722 00:24:36.916 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # mlx=() 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # local -ga mlx 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:24:37.179 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:24:37.179 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:24:37.179 Found net devices under 0000:98:00.0: mlx_0_0 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:24:37.179 Found net devices under 0000:98:00.1: mlx_0_1 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # is_hw=yes 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # rdma_device_init 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # uname 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:37.179 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:37.180 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:37.180 11:33:05 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:37.180 26: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:37.180 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:24:37.180 altname enp152s0f0np0 00:24:37.180 altname ens817f0np0 00:24:37.180 inet 192.168.100.8/24 scope global mlx_0_0 00:24:37.180 valid_lft forever preferred_lft forever 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:37.180 27: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:37.180 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:24:37.180 altname enp152s0f1np1 00:24:37.180 altname ens817f1np1 00:24:37.180 inet 192.168.100.9/24 scope global mlx_0_1 00:24:37.180 valid_lft forever preferred_lft forever 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # return 0 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # '[' '' == iso ']' 
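
The address lookups above reduce to taking the first IPv4 assigned to each mlx netdev; a condensed, standalone version of that extraction (interface names hard-coded here for illustration, outside the common.sh helpers) would be:

# print the listener IPv4 for each RDMA-capable port, as used by the tests
for ifc in mlx_0_0 mlx_0_1; do
  ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done
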
00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:37.180 
192.168.100.9' 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:37.180 192.168.100.9' 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # head -n 1 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:37.180 192.168.100.9' 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # tail -n +2 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # head -n 1 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # nvmfpid=3690192 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # waitforlisten 3690192 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@830 -- # '[' -z 3690192 ']' 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:37.180 11:33:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:37.441 [2024-06-10 11:33:06.184886] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:24:37.441 [2024-06-10 11:33:06.184943] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.441 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.441 [2024-06-10 11:33:06.246947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:37.441 [2024-06-10 11:33:06.313440] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:37.441 [2024-06-10 11:33:06.313478] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.441 [2024-06-10 11:33:06.313487] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.441 [2024-06-10 11:33:06.313493] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.441 [2024-06-10 11:33:06.313499] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:37.441 [2024-06-10 11:33:06.316780] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.441 [2024-06-10 11:33:06.316817] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.441 [2024-06-10 11:33:06.320889] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:37.441 [2024-06-10 11:33:06.320890] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@863 -- # return 0 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:38.384 [2024-06-10 11:33:07.097147] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb7a0b0/0xb7e5a0) succeed. 00:24:38.384 [2024-06-10 11:33:07.111609] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb7b6f0/0xbbfc30) succeed. 
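
The target configuration that follows repeats the same steps for cnode0 through cnode5: create a subsystem, back it with a 64 MiB malloc bdev, add the namespace and an RDMA listener, then connect from the initiator. The test drives this through its rpc_cmd wrapper; written against scripts/rpc.py directly (default /var/tmp/spdk.sock socket, with the --hostnqn/--hostid flags from the log omitted for brevity), the loop is roughly:

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024
for i in $(seq 0 5); do
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s "SPDK$(printf '%014d' "$i")"
  $rpc bdev_malloc_create 64 512 -b Malloc$i
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
  nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420
done
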
00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:38.384 Malloc0 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:38.384 [2024-06-10 11:33:07.214266] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:38.384 11:33:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:24:39.768 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:24:39.768 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:24:39.768 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:24:39.768 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme0n1 00:24:39.768 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:24:39.768 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme0n1 00:24:39.768 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:24:39.768 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:24:39.768 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:39.769 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:39.769 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:39.769 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:39.769 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:39.769 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:39.769 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:40.029 Malloc1 00:24:40.029 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:40.029 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:40.029 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:40.029 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:40.029 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:40.029 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:40.029 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:40.029 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:40.029 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:40.029 11:33:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme1n1 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme1n1 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:41.414 Malloc2 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.414 11:33:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:24:42.800 11:33:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:24:42.800 11:33:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:24:42.800 11:33:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:24:42.800 11:33:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme2n1 00:24:42.800 11:33:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:24:42.800 11:33:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme2n1 00:24:42.800 11:33:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:24:42.800 11:33:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:24:42.800 11:33:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:24:42.800 11:33:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:42.800 11:33:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:42.800 11:33:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:42.800 11:33:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:42.800 11:33:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:42.800 11:33:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:43.061 Malloc3 00:24:43.061 11:33:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:43.061 11:33:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:43.061 11:33:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:43.061 11:33:11 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:43.061 11:33:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:43.061 11:33:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:24:43.061 11:33:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:43.061 11:33:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:43.061 11:33:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:43.061 11:33:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme3n1 00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme3n1 00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:44.444 Malloc4 00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 
00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.444 11:33:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:24:45.861 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:24:45.861 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:24:45.861 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:24:45.861 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme4n1 00:24:46.122 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:24:46.122 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme4n1 00:24:46.122 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:24:46.122 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:24:46.122 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:24:46.122 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:46.122 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:46.122 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:46.122 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:46.122 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:46.122 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:46.122 Malloc5 00:24:46.122 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:46.122 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:46.122 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:46.122 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:46.122 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:46.122 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:24:46.122 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:46.122 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:46.122 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:46.122 11:33:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:24:47.506 11:33:16 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:24:47.506 11:33:16 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1234 -- # local i=0 00:24:47.506 11:33:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme5n1 00:24:47.506 11:33:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:24:47.506 11:33:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:24:47.506 11:33:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme5n1 00:24:47.506 11:33:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:24:47.506 11:33:16 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:24:47.506 [global] 00:24:47.506 thread=1 00:24:47.506 invalidate=1 00:24:47.506 rw=read 00:24:47.506 time_based=1 00:24:47.506 runtime=10 00:24:47.506 ioengine=libaio 00:24:47.507 direct=1 00:24:47.507 bs=1048576 00:24:47.507 iodepth=128 00:24:47.507 norandommap=1 00:24:47.507 numjobs=13 00:24:47.507 00:24:47.507 [job0] 00:24:47.507 filename=/dev/nvme0n1 00:24:47.507 [job1] 00:24:47.507 filename=/dev/nvme1n1 00:24:47.507 [job2] 00:24:47.507 filename=/dev/nvme2n1 00:24:47.507 [job3] 00:24:47.507 filename=/dev/nvme3n1 00:24:47.507 [job4] 00:24:47.507 filename=/dev/nvme4n1 00:24:47.507 [job5] 00:24:47.507 filename=/dev/nvme5n1 00:24:47.806 Could not set queue depth (nvme0n1) 00:24:47.806 Could not set queue depth (nvme1n1) 00:24:47.806 Could not set queue depth (nvme2n1) 00:24:47.806 Could not set queue depth (nvme3n1) 00:24:47.806 Could not set queue depth (nvme4n1) 00:24:47.806 Could not set queue depth (nvme5n1) 00:24:48.072 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:24:48.072 ... 00:24:48.072 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:24:48.072 ... 00:24:48.072 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:24:48.072 ... 00:24:48.072 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:24:48.072 ... 00:24:48.072 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:24:48.072 ... 00:24:48.072 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:24:48.072 ... 
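For readability, the xtrace output above is one pass of a six-iteration loop; condensed, the setup it records looks like the sketch below. This is a reconstruction from the trace, not SPDK's actual target/srq_overwhelm.sh: rpc_cmd and waitforblk are helpers from test/common/autotest_common.sh (rpc_cmd wraps scripts/rpc.py), the cnode/Malloc/serial numbering is inferred from the visible iterations, and the retry bound in waitforblk is an assumption.

    #!/usr/bin/env bash
    # Reconstructed sketch of the per-subsystem setup traced above
    # (srq_overwhelm.sh@22-28), under the assumptions stated in the lead-in.
    NVMF_TARGET_IP=192.168.100.8    # listener address seen in the trace
    NVMF_PORT=4420
    HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID

    waitforblk() {
        # Poll until the connected namespace shows up as a block device.
        local i=0
        until lsblk -l -o NAME | grep -q -w "$1"; do
            ((++i > 15)) && return 1    # retry bound is an assumption
            sleep 1
        done
    }

    for i in $(seq 0 5); do
        nqn=nqn.2016-06.io.spdk:cnode$i
        rpc_cmd nvmf_create_subsystem "$nqn" -a -s "SPDK0000000000000${i}"
        rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"    # 64 MB bdev, 512-byte blocks
        rpc_cmd nvmf_subsystem_add_ns "$nqn" "Malloc$i"
        rpc_cmd nvmf_subsystem_add_listener "$nqn" -t rdma -a "$NVMF_TARGET_IP" -s "$NVMF_PORT"
        # connect flags copied verbatim from the trace (-i is --nr-io-queues in nvme-cli)
        nvme connect -i 15 --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
            -t rdma -n "$nqn" -a "$NVMF_TARGET_IP" -s "$NVMF_PORT"
        waitforblk "nvme${i}n1"
    done

    # Drive all six namespaces at once (srq_overwhelm.sh@36): per the job file
    # echoed above, this expands to 1 MiB reads at iodepth 128 with 13 jobs
    # per device for a 10-second time-based run.
    scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13

In the fio report that follows, slat and clat are fio's submission and completion latencies, "issued rwts: total=..." counts the I/Os each job completed, and 6 devices x numjobs=13 accounts for the 78 threads fio starts.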
00:24:48.072 fio-3.35 00:24:48.072 Starting 78 threads 00:25:00.305 00:25:00.305 job0: (groupid=0, jobs=1): err= 0: pid=3693003: Mon Jun 10 11:33:27 2024 00:25:00.305 read: IOPS=1, BW=1656KiB/s (1696kB/s)(17.0MiB/10511msec) 00:25:00.305 slat (usec): min=686, max=3935.2k, avg=615372.15, stdev=1190106.89 00:25:00.305 clat (msec): min=48, max=10509, avg=8042.25, stdev=3602.36 00:25:00.305 lat (msec): min=2181, max=10510, avg=8657.63, stdev=2993.67 00:25:00.305 clat percentiles (msec): 00:25:00.305 | 1.00th=[ 50], 5.00th=[ 50], 10.00th=[ 2198], 20.00th=[ 4329], 00:25:00.305 | 30.00th=[ 6477], 40.00th=[10402], 50.00th=[10402], 60.00th=[10537], 00:25:00.305 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:25:00.305 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:25:00.305 | 99.99th=[10537] 00:25:00.305 lat (msec) : 50=5.88%, >=2000=94.12% 00:25:00.305 cpu : usr=0.00%, sys=0.13%, ctx=70, majf=0, minf=4353 00:25:00.305 IO depths : 1=5.9%, 2=11.8%, 4=23.5%, 8=47.1%, 16=11.8%, 32=0.0%, >=64=0.0% 00:25:00.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.305 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:25:00.305 issued rwts: total=17,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.306 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.306 job0: (groupid=0, jobs=1): err= 0: pid=3693004: Mon Jun 10 11:33:27 2024 00:25:00.306 read: IOPS=107, BW=108MiB/s (113MB/s)(1160MiB/10759msec) 00:25:00.306 slat (usec): min=24, max=2071.7k, avg=9236.86, stdev=69921.61 00:25:00.306 clat (msec): min=37, max=2519, avg=1088.12, stdev=729.01 00:25:00.306 lat (msec): min=317, max=2520, avg=1097.36, stdev=730.56 00:25:00.306 clat percentiles (msec): 00:25:00.306 | 1.00th=[ 317], 5.00th=[ 330], 10.00th=[ 363], 20.00th=[ 384], 00:25:00.306 | 30.00th=[ 401], 40.00th=[ 558], 50.00th=[ 1083], 60.00th=[ 1116], 00:25:00.306 | 70.00th=[ 1452], 80.00th=[ 1938], 90.00th=[ 2232], 95.00th=[ 2366], 00:25:00.306 | 99.00th=[ 2500], 99.50th=[ 2500], 99.90th=[ 2534], 99.95th=[ 2534], 00:25:00.306 | 99.99th=[ 2534] 00:25:00.306 bw ( KiB/s): min=16384, max=389120, per=3.65%, avg=162545.00, stdev=119221.34, samples=13 00:25:00.306 iops : min= 16, max= 380, avg=158.62, stdev=116.48, samples=13 00:25:00.306 lat (msec) : 50=0.09%, 500=35.43%, 750=9.74%, 1000=0.34%, 2000=35.52% 00:25:00.306 lat (msec) : >=2000=18.88% 00:25:00.306 cpu : usr=0.03%, sys=2.28%, ctx=1902, majf=0, minf=32769 00:25:00.306 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.6% 00:25:00.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.306 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:00.306 issued rwts: total=1160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.306 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.306 job0: (groupid=0, jobs=1): err= 0: pid=3693005: Mon Jun 10 11:33:27 2024 00:25:00.306 read: IOPS=15, BW=15.6MiB/s (16.4MB/s)(168MiB/10738msec) 00:25:00.306 slat (usec): min=407, max=2109.8k, avg=63684.23, stdev=315605.11 00:25:00.306 clat (msec): min=37, max=10626, avg=7532.90, stdev=3130.77 00:25:00.306 lat (msec): min=1899, max=10631, avg=7596.59, stdev=3081.62 00:25:00.306 clat percentiles (msec): 00:25:00.306 | 1.00th=[ 1905], 5.00th=[ 1955], 10.00th=[ 1989], 20.00th=[ 2198], 00:25:00.306 | 30.00th=[ 8221], 40.00th=[ 8658], 50.00th=[ 8926], 60.00th=[ 9194], 00:25:00.306 | 70.00th=[ 9597], 80.00th=[ 9866], 90.00th=[10134], 
95.00th=[10268], 00:25:00.306 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:25:00.306 | 99.99th=[10671] 00:25:00.306 bw ( KiB/s): min= 2048, max=43008, per=0.37%, avg=16384.00, stdev=15995.39, samples=5 00:25:00.306 iops : min= 2, max= 42, avg=16.00, stdev=15.62, samples=5 00:25:00.306 lat (msec) : 50=0.60%, 2000=10.71%, >=2000=88.69% 00:25:00.306 cpu : usr=0.00%, sys=1.12%, ctx=396, majf=0, minf=32769 00:25:00.306 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.8%, 16=9.5%, 32=19.0%, >=64=62.5% 00:25:00.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.306 complete : 0=0.0%, 4=97.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.4% 00:25:00.306 issued rwts: total=168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.306 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.306 job0: (groupid=0, jobs=1): err= 0: pid=3693006: Mon Jun 10 11:33:27 2024 00:25:00.306 read: IOPS=167, BW=167MiB/s (175MB/s)(1688MiB/10096msec) 00:25:00.306 slat (usec): min=31, max=78771, avg=5924.01, stdev=8011.23 00:25:00.306 clat (msec): min=83, max=2321, avg=683.55, stdev=338.93 00:25:00.306 lat (msec): min=105, max=2323, avg=689.47, stdev=341.57 00:25:00.306 clat percentiles (msec): 00:25:00.306 | 1.00th=[ 203], 5.00th=[ 418], 10.00th=[ 443], 20.00th=[ 518], 00:25:00.306 | 30.00th=[ 531], 40.00th=[ 567], 50.00th=[ 617], 60.00th=[ 634], 00:25:00.306 | 70.00th=[ 735], 80.00th=[ 810], 90.00th=[ 844], 95.00th=[ 1234], 00:25:00.306 | 99.00th=[ 2265], 99.50th=[ 2299], 99.90th=[ 2333], 99.95th=[ 2333], 00:25:00.306 | 99.99th=[ 2333] 00:25:00.306 bw ( KiB/s): min=88064, max=309248, per=4.48%, avg=199613.31, stdev=54860.10, samples=16 00:25:00.306 iops : min= 86, max= 302, avg=194.88, stdev=53.61, samples=16 00:25:00.306 lat (msec) : 100=0.06%, 250=1.72%, 500=14.16%, 750=58.65%, 1000=19.37% 00:25:00.306 lat (msec) : 2000=3.20%, >=2000=2.84% 00:25:00.306 cpu : usr=0.13%, sys=2.83%, ctx=1827, majf=0, minf=32769 00:25:00.306 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:25:00.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.306 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:00.306 issued rwts: total=1688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.306 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.306 job0: (groupid=0, jobs=1): err= 0: pid=3693007: Mon Jun 10 11:33:27 2024 00:25:00.306 read: IOPS=1, BW=1267KiB/s (1297kB/s)(13.0MiB/10507msec) 00:25:00.306 slat (usec): min=769, max=4225.5k, avg=804047.18, stdev=1344779.09 00:25:00.306 clat (msec): min=53, max=10503, avg=7345.18, stdev=3237.01 00:25:00.306 lat (msec): min=4279, max=10506, avg=8149.23, stdev=2485.99 00:25:00.306 clat percentiles (msec): 00:25:00.306 | 1.00th=[ 54], 5.00th=[ 54], 10.00th=[ 4279], 20.00th=[ 4329], 00:25:00.306 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[ 6477], 60.00th=[ 8658], 00:25:00.306 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:25:00.306 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:25:00.306 | 99.99th=[10537] 00:25:00.306 lat (msec) : 100=7.69%, >=2000=92.31% 00:25:00.306 cpu : usr=0.00%, sys=0.10%, ctx=69, majf=0, minf=3329 00:25:00.306 IO depths : 1=7.7%, 2=15.4%, 4=30.8%, 8=46.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:00.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.306 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.306 issued rwts: 
total=13,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.306 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.306 job0: (groupid=0, jobs=1): err= 0: pid=3693009: Mon Jun 10 11:33:27 2024 00:25:00.306 read: IOPS=1, BW=1864KiB/s (1909kB/s)(19.0MiB/10437msec) 00:25:00.306 slat (usec): min=1440, max=2115.5k, avg=547132.81, stdev=919915.33 00:25:00.306 clat (msec): min=41, max=10415, avg=6627.41, stdev=2955.00 00:25:00.306 lat (msec): min=2133, max=10436, avg=7174.55, stdev=2610.05 00:25:00.306 clat percentiles (msec): 00:25:00.306 | 1.00th=[ 42], 5.00th=[ 42], 10.00th=[ 2140], 20.00th=[ 4245], 00:25:00.306 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[ 6477], 60.00th=[ 8557], 00:25:00.306 | 70.00th=[ 8658], 80.00th=[ 8658], 90.00th=[10402], 95.00th=[10402], 00:25:00.306 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:25:00.306 | 99.99th=[10402] 00:25:00.306 lat (msec) : 50=5.26%, >=2000=94.74% 00:25:00.306 cpu : usr=0.00%, sys=0.07%, ctx=90, majf=0, minf=4865 00:25:00.306 IO depths : 1=5.3%, 2=10.5%, 4=21.1%, 8=42.1%, 16=21.1%, 32=0.0%, >=64=0.0% 00:25:00.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.306 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:25:00.306 issued rwts: total=19,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.306 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.306 job0: (groupid=0, jobs=1): err= 0: pid=3693010: Mon Jun 10 11:33:27 2024 00:25:00.306 read: IOPS=11, BW=11.4MiB/s (12.0MB/s)(120MiB/10528msec) 00:25:00.306 slat (usec): min=789, max=2130.2k, avg=87280.71, stdev=377337.24 00:25:00.306 clat (msec): min=53, max=10514, avg=8918.68, stdev=2130.98 00:25:00.306 lat (msec): min=2129, max=10527, avg=9005.96, stdev=1973.48 00:25:00.306 clat percentiles (msec): 00:25:00.306 | 1.00th=[ 2123], 5.00th=[ 2165], 10.00th=[ 6409], 20.00th=[ 8926], 00:25:00.306 | 30.00th=[ 9194], 40.00th=[ 9329], 50.00th=[ 9597], 60.00th=[ 9731], 00:25:00.306 | 70.00th=[10000], 80.00th=[10134], 90.00th=[10402], 95.00th=[10402], 00:25:00.306 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:25:00.306 | 99.99th=[10537] 00:25:00.306 lat (msec) : 100=0.83%, >=2000=99.17% 00:25:00.306 cpu : usr=0.00%, sys=0.73%, ctx=286, majf=0, minf=30721 00:25:00.306 IO depths : 1=0.8%, 2=1.7%, 4=3.3%, 8=6.7%, 16=13.3%, 32=26.7%, >=64=47.5% 00:25:00.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.306 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:25:00.306 issued rwts: total=120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.306 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.306 job0: (groupid=0, jobs=1): err= 0: pid=3693011: Mon Jun 10 11:33:27 2024 00:25:00.306 read: IOPS=95, BW=95.9MiB/s (101MB/s)(1033MiB/10776msec) 00:25:00.306 slat (usec): min=26, max=2090.5k, avg=10388.61, stdev=67772.53 00:25:00.306 clat (msec): min=37, max=3698, avg=1234.79, stdev=819.32 00:25:00.306 lat (msec): min=426, max=3724, avg=1245.18, stdev=821.59 00:25:00.306 clat percentiles (msec): 00:25:00.306 | 1.00th=[ 426], 5.00th=[ 430], 10.00th=[ 439], 20.00th=[ 447], 00:25:00.306 | 30.00th=[ 659], 40.00th=[ 927], 50.00th=[ 1083], 60.00th=[ 1116], 00:25:00.306 | 70.00th=[ 1368], 80.00th=[ 1737], 90.00th=[ 2802], 95.00th=[ 3004], 00:25:00.306 | 99.00th=[ 3574], 99.50th=[ 3641], 99.90th=[ 3708], 99.95th=[ 3708], 00:25:00.306 | 99.99th=[ 3708] 00:25:00.306 bw ( KiB/s): min= 4096, max=296960, per=2.77%, 
avg=123546.80, stdev=89139.08, samples=15 00:25:00.306 iops : min= 4, max= 290, avg=120.60, stdev=87.05, samples=15 00:25:00.306 lat (msec) : 50=0.10%, 500=22.17%, 750=11.04%, 1000=10.55%, 2000=38.14% 00:25:00.306 lat (msec) : >=2000=18.01% 00:25:00.306 cpu : usr=0.06%, sys=2.34%, ctx=2027, majf=0, minf=32207 00:25:00.306 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=93.9% 00:25:00.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.306 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:00.306 issued rwts: total=1033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.306 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.306 job0: (groupid=0, jobs=1): err= 0: pid=3693012: Mon Jun 10 11:33:27 2024 00:25:00.306 read: IOPS=35, BW=35.2MiB/s (36.9MB/s)(369MiB/10479msec) 00:25:00.306 slat (usec): min=606, max=2150.6k, avg=28247.54, stdev=216352.40 00:25:00.306 clat (msec): min=52, max=9265, avg=3493.94, stdev=3701.08 00:25:00.306 lat (msec): min=614, max=9266, avg=3522.19, stdev=3706.84 00:25:00.306 clat percentiles (msec): 00:25:00.306 | 1.00th=[ 617], 5.00th=[ 617], 10.00th=[ 617], 20.00th=[ 634], 00:25:00.306 | 30.00th=[ 634], 40.00th=[ 667], 50.00th=[ 726], 60.00th=[ 2165], 00:25:00.306 | 70.00th=[ 7148], 80.00th=[ 8792], 90.00th=[ 9060], 95.00th=[ 9194], 00:25:00.306 | 99.00th=[ 9194], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329], 00:25:00.306 | 99.99th=[ 9329] 00:25:00.306 bw ( KiB/s): min= 4096, max=178176, per=1.58%, avg=70504.43, stdev=75093.18, samples=7 00:25:00.306 iops : min= 4, max= 174, avg=68.71, stdev=73.45, samples=7 00:25:00.307 lat (msec) : 100=0.27%, 750=53.66%, 1000=5.15%, >=2000=40.92% 00:25:00.307 cpu : usr=0.03%, sys=1.01%, ctx=674, majf=0, minf=32769 00:25:00.307 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.3%, 32=8.7%, >=64=82.9% 00:25:00.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.307 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:00.307 issued rwts: total=369,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.307 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.307 job0: (groupid=0, jobs=1): err= 0: pid=3693013: Mon Jun 10 11:33:27 2024 00:25:00.307 read: IOPS=33, BW=33.5MiB/s (35.1MB/s)(350MiB/10442msec) 00:25:00.307 slat (usec): min=567, max=2147.3k, avg=29687.44, stdev=222908.22 00:25:00.307 clat (msec): min=48, max=9302, avg=3657.34, stdev=3818.24 00:25:00.307 lat (msec): min=609, max=9304, avg=3687.03, stdev=3822.89 00:25:00.307 clat percentiles (msec): 00:25:00.307 | 1.00th=[ 609], 5.00th=[ 609], 10.00th=[ 617], 20.00th=[ 617], 00:25:00.307 | 30.00th=[ 634], 40.00th=[ 676], 50.00th=[ 726], 60.00th=[ 2198], 00:25:00.307 | 70.00th=[ 8658], 80.00th=[ 8926], 90.00th=[ 9060], 95.00th=[ 9194], 00:25:00.307 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329], 00:25:00.307 | 99.99th=[ 9329] 00:25:00.307 bw ( KiB/s): min= 4096, max=174080, per=1.46%, avg=64950.86, stdev=72424.28, samples=7 00:25:00.307 iops : min= 4, max= 170, avg=63.43, stdev=70.73, samples=7 00:25:00.307 lat (msec) : 50=0.29%, 750=54.57%, 1000=3.71%, >=2000=41.43% 00:25:00.307 cpu : usr=0.03%, sys=0.91%, ctx=658, majf=0, minf=32769 00:25:00.307 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.6%, 32=9.1%, >=64=82.0% 00:25:00.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.307 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:00.307 
issued rwts: total=350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.307 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.307 job0: (groupid=0, jobs=1): err= 0: pid=3693014: Mon Jun 10 11:33:27 2024 00:25:00.307 read: IOPS=2, BW=2415KiB/s (2473kB/s)(25.0MiB/10601msec) 00:25:00.307 slat (usec): min=688, max=2128.9k, avg=422446.46, stdev=843096.84 00:25:00.307 clat (msec): min=39, max=10565, avg=6542.21, stdev=3481.57 00:25:00.307 lat (msec): min=2116, max=10600, avg=6964.66, stdev=3295.41 00:25:00.307 clat percentiles (msec): 00:25:00.307 | 1.00th=[ 40], 5.00th=[ 2123], 10.00th=[ 2123], 20.00th=[ 2165], 00:25:00.307 | 30.00th=[ 4279], 40.00th=[ 6409], 50.00th=[ 6409], 60.00th=[ 6409], 00:25:00.307 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:25:00.307 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:25:00.307 | 99.99th=[10537] 00:25:00.307 lat (msec) : 50=4.00%, >=2000=96.00% 00:25:00.307 cpu : usr=0.01%, sys=0.23%, ctx=57, majf=0, minf=6401 00:25:00.307 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0% 00:25:00.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.307 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:25:00.307 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.307 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.307 job0: (groupid=0, jobs=1): err= 0: pid=3693016: Mon Jun 10 11:33:27 2024 00:25:00.307 read: IOPS=2, BW=2241KiB/s (2295kB/s)(23.0MiB/10509msec) 00:25:00.307 slat (usec): min=684, max=3949.2k, avg=454573.43, stdev=1038639.06 00:25:00.307 clat (msec): min=52, max=10506, avg=5513.78, stdev=3942.83 00:25:00.307 lat (msec): min=2104, max=10508, avg=5968.36, stdev=3886.93 00:25:00.307 clat percentiles (msec): 00:25:00.307 | 1.00th=[ 53], 5.00th=[ 2106], 10.00th=[ 2106], 20.00th=[ 2106], 00:25:00.307 | 30.00th=[ 2198], 40.00th=[ 2198], 50.00th=[ 4279], 60.00th=[ 6409], 00:25:00.307 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10537], 95.00th=[10537], 00:25:00.307 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:25:00.307 | 99.99th=[10537] 00:25:00.307 lat (msec) : 100=4.35%, >=2000=95.65% 00:25:00.307 cpu : usr=0.00%, sys=0.12%, ctx=65, majf=0, minf=5889 00:25:00.307 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0% 00:25:00.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.307 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:25:00.307 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.307 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.307 job0: (groupid=0, jobs=1): err= 0: pid=3693017: Mon Jun 10 11:33:27 2024 00:25:00.307 read: IOPS=13, BW=13.7MiB/s (14.3MB/s)(146MiB/10683msec) 00:25:00.307 slat (usec): min=655, max=2187.6k, avg=72848.35, stdev=345306.29 00:25:00.307 clat (msec): min=45, max=10408, avg=8578.66, stdev=2566.98 00:25:00.307 lat (msec): min=1628, max=10412, avg=8651.51, stdev=2466.86 00:25:00.307 clat percentiles (msec): 00:25:00.307 | 1.00th=[ 1620], 5.00th=[ 1653], 10.00th=[ 4111], 20.00th=[ 8926], 00:25:00.307 | 30.00th=[ 9060], 40.00th=[ 9329], 50.00th=[ 9463], 60.00th=[ 9731], 00:25:00.307 | 70.00th=[ 9866], 80.00th=[10134], 90.00th=[10268], 95.00th=[10268], 00:25:00.307 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:25:00.307 | 99.99th=[10402] 00:25:00.307 bw ( KiB/s): min= 2048, 
max=20480, per=0.17%, avg=7372.80, stdev=7744.58, samples=5 00:25:00.307 iops : min= 2, max= 20, avg= 7.20, stdev= 7.56, samples=5 00:25:00.307 lat (msec) : 50=0.68%, 2000=7.53%, >=2000=91.78% 00:25:00.307 cpu : usr=0.00%, sys=0.99%, ctx=348, majf=0, minf=32769 00:25:00.307 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=5.5%, 16=11.0%, 32=21.9%, >=64=56.8% 00:25:00.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.307 complete : 0=0.0%, 4=95.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=5.0% 00:25:00.307 issued rwts: total=146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.307 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.307 job1: (groupid=0, jobs=1): err= 0: pid=3693033: Mon Jun 10 11:33:27 2024 00:25:00.307 read: IOPS=3, BW=3640KiB/s (3728kB/s)(38.0MiB/10689msec) 00:25:00.307 slat (usec): min=1481, max=2122.5k, avg=280080.39, stdev=699104.39 00:25:00.307 clat (msec): min=45, max=10687, avg=7958.49, stdev=3367.06 00:25:00.307 lat (msec): min=2115, max=10688, avg=8238.57, stdev=3125.00 00:25:00.307 clat percentiles (msec): 00:25:00.307 | 1.00th=[ 46], 5.00th=[ 2123], 10.00th=[ 2140], 20.00th=[ 4279], 00:25:00.307 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10537], 60.00th=[10537], 00:25:00.307 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:25:00.307 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:25:00.307 | 99.99th=[10671] 00:25:00.307 lat (msec) : 50=2.63%, >=2000=97.37% 00:25:00.307 cpu : usr=0.02%, sys=0.45%, ctx=88, majf=0, minf=9729 00:25:00.307 IO depths : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0% 00:25:00.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.307 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:25:00.307 issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.307 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.307 job1: (groupid=0, jobs=1): err= 0: pid=3693034: Mon Jun 10 11:33:27 2024 00:25:00.307 read: IOPS=29, BW=29.2MiB/s (30.7MB/s)(305MiB/10433msec) 00:25:00.307 slat (usec): min=32, max=2111.5k, avg=32847.52, stdev=187284.81 00:25:00.307 clat (msec): min=412, max=7697, avg=3922.71, stdev=3014.75 00:25:00.307 lat (msec): min=435, max=7702, avg=3955.56, stdev=3019.94 00:25:00.307 clat percentiles (msec): 00:25:00.307 | 1.00th=[ 439], 5.00th=[ 531], 10.00th=[ 642], 20.00th=[ 1036], 00:25:00.307 | 30.00th=[ 1552], 40.00th=[ 1854], 50.00th=[ 1905], 60.00th=[ 5403], 00:25:00.307 | 70.00th=[ 7483], 80.00th=[ 7550], 90.00th=[ 7617], 95.00th=[ 7617], 00:25:00.307 | 99.00th=[ 7684], 99.50th=[ 7684], 99.90th=[ 7684], 99.95th=[ 7684], 00:25:00.307 | 99.99th=[ 7684] 00:25:00.307 bw ( KiB/s): min= 6144, max=72132, per=0.83%, avg=36914.22, stdev=28502.79, samples=9 00:25:00.307 iops : min= 6, max= 70, avg=36.00, stdev=27.77, samples=9 00:25:00.307 lat (msec) : 500=3.61%, 750=9.51%, 1000=5.90%, 2000=35.08%, >=2000=45.90% 00:25:00.307 cpu : usr=0.01%, sys=0.68%, ctx=663, majf=0, minf=32769 00:25:00.307 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.5%, >=64=79.3% 00:25:00.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.307 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:25:00.307 issued rwts: total=305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.307 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.307 job1: (groupid=0, jobs=1): err= 0: pid=3693035: Mon Jun 10 11:33:27 2024 
00:25:00.307 read: IOPS=10, BW=10.1MiB/s (10.6MB/s)(106MiB/10445msec) 00:25:00.307 slat (usec): min=232, max=2153.8k, avg=98142.85, stdev=420220.79 00:25:00.307 clat (msec): min=41, max=10441, avg=8461.17, stdev=1425.20 00:25:00.307 lat (msec): min=2131, max=10444, avg=8559.31, stdev=1176.34 00:25:00.307 clat percentiles (msec): 00:25:00.307 | 1.00th=[ 2140], 5.00th=[ 8154], 10.00th=[ 8221], 20.00th=[ 8221], 00:25:00.307 | 30.00th=[ 8288], 40.00th=[ 8356], 50.00th=[ 8423], 60.00th=[ 8490], 00:25:00.307 | 70.00th=[ 8557], 80.00th=[ 8557], 90.00th=[10402], 95.00th=[10402], 00:25:00.307 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:25:00.307 | 99.99th=[10402] 00:25:00.307 lat (msec) : 50=0.94%, >=2000=99.06% 00:25:00.307 cpu : usr=0.00%, sys=0.56%, ctx=191, majf=0, minf=27137 00:25:00.307 IO depths : 1=0.9%, 2=1.9%, 4=3.8%, 8=7.5%, 16=15.1%, 32=30.2%, >=64=40.6% 00:25:00.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.307 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:25:00.307 issued rwts: total=106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.307 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.307 job1: (groupid=0, jobs=1): err= 0: pid=3693036: Mon Jun 10 11:33:27 2024 00:25:00.307 read: IOPS=359, BW=359MiB/s (377MB/s)(3767MiB/10491msec) 00:25:00.307 slat (usec): min=24, max=2137.3k, avg=2766.96, stdev=45515.75 00:25:00.307 clat (msec): min=53, max=2528, avg=346.73, stdev=542.34 00:25:00.307 lat (msec): min=102, max=2723, avg=349.50, stdev=544.60 00:25:00.307 clat percentiles (msec): 00:25:00.307 | 1.00th=[ 103], 5.00th=[ 104], 10.00th=[ 104], 20.00th=[ 105], 00:25:00.307 | 30.00th=[ 108], 40.00th=[ 167], 50.00th=[ 205], 60.00th=[ 207], 00:25:00.307 | 70.00th=[ 228], 80.00th=[ 309], 90.00th=[ 567], 95.00th=[ 2232], 00:25:00.307 | 99.00th=[ 2467], 99.50th=[ 2500], 99.90th=[ 2500], 99.95th=[ 2500], 00:25:00.307 | 99.99th=[ 2534] 00:25:00.307 bw ( KiB/s): min=112640, max=1251328, per=11.95%, avg=532225.93, stdev=352953.53, samples=14 00:25:00.307 iops : min= 110, max= 1222, avg=519.71, stdev=344.70, samples=14 00:25:00.307 lat (msec) : 100=0.03%, 250=72.18%, 500=17.52%, 750=1.65%, 1000=1.88% 00:25:00.308 lat (msec) : >=2000=6.74% 00:25:00.308 cpu : usr=0.17%, sys=2.54%, ctx=3727, majf=0, minf=32769 00:25:00.308 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:25:00.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:00.308 issued rwts: total=3767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.308 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.308 job1: (groupid=0, jobs=1): err= 0: pid=3693037: Mon Jun 10 11:33:27 2024 00:25:00.308 read: IOPS=31, BW=31.3MiB/s (32.8MB/s)(327MiB/10439msec) 00:25:00.308 slat (usec): min=25, max=2106.8k, avg=31778.10, stdev=172631.04 00:25:00.308 clat (msec): min=45, max=4970, avg=3005.94, stdev=1539.37 00:25:00.308 lat (msec): min=1040, max=4987, avg=3037.72, stdev=1531.77 00:25:00.308 clat percentiles (msec): 00:25:00.308 | 1.00th=[ 1036], 5.00th=[ 1053], 10.00th=[ 1053], 20.00th=[ 1150], 00:25:00.308 | 30.00th=[ 1351], 40.00th=[ 1821], 50.00th=[ 3608], 60.00th=[ 4144], 00:25:00.308 | 70.00th=[ 4396], 80.00th=[ 4530], 90.00th=[ 4732], 95.00th=[ 4799], 00:25:00.308 | 99.00th=[ 4933], 99.50th=[ 4933], 99.90th=[ 5000], 99.95th=[ 5000], 00:25:00.308 | 99.99th=[ 5000] 00:25:00.308 bw ( KiB/s): 
min=10240, max=126976, per=1.31%, avg=58221.71, stdev=42498.79, samples=7 00:25:00.308 iops : min= 10, max= 124, avg=56.86, stdev=41.50, samples=7 00:25:00.308 lat (msec) : 50=0.31%, 2000=39.76%, >=2000=59.94% 00:25:00.308 cpu : usr=0.05%, sys=0.89%, ctx=669, majf=0, minf=32769 00:25:00.308 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.9%, 32=9.8%, >=64=80.7% 00:25:00.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.308 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:25:00.308 issued rwts: total=327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.308 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.308 job1: (groupid=0, jobs=1): err= 0: pid=3693039: Mon Jun 10 11:33:27 2024 00:25:00.308 read: IOPS=7, BW=7811KiB/s (7998kB/s)(81.0MiB/10619msec) 00:25:00.308 slat (usec): min=650, max=2082.2k, avg=130249.92, stdev=475431.72 00:25:00.308 clat (msec): min=67, max=10612, avg=8524.55, stdev=2946.00 00:25:00.308 lat (msec): min=2145, max=10617, avg=8654.80, stdev=2796.87 00:25:00.308 clat percentiles (msec): 00:25:00.308 | 1.00th=[ 68], 5.00th=[ 2198], 10.00th=[ 4111], 20.00th=[ 4329], 00:25:00.308 | 30.00th=[ 8557], 40.00th=[10402], 50.00th=[10402], 60.00th=[10537], 00:25:00.308 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:25:00.308 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:25:00.308 | 99.99th=[10671] 00:25:00.308 lat (msec) : 100=1.23%, >=2000=98.77% 00:25:00.308 cpu : usr=0.00%, sys=0.82%, ctx=136, majf=0, minf=20737 00:25:00.308 IO depths : 1=1.2%, 2=2.5%, 4=4.9%, 8=9.9%, 16=19.8%, 32=39.5%, >=64=22.2% 00:25:00.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.308 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:25:00.308 issued rwts: total=81,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.308 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.308 job1: (groupid=0, jobs=1): err= 0: pid=3693040: Mon Jun 10 11:33:27 2024 00:25:00.308 read: IOPS=25, BW=25.1MiB/s (26.3MB/s)(264MiB/10523msec) 00:25:00.308 slat (usec): min=610, max=2098.0k, avg=39626.31, stdev=205533.28 00:25:00.308 clat (msec): min=59, max=8717, avg=4614.95, stdev=1775.51 00:25:00.308 lat (msec): min=1528, max=8732, avg=4654.57, stdev=1770.74 00:25:00.308 clat percentiles (msec): 00:25:00.308 | 1.00th=[ 1519], 5.00th=[ 1603], 10.00th=[ 2165], 20.00th=[ 3071], 00:25:00.308 | 30.00th=[ 3641], 40.00th=[ 3842], 50.00th=[ 4279], 60.00th=[ 5604], 00:25:00.308 | 70.00th=[ 5805], 80.00th=[ 6007], 90.00th=[ 6275], 95.00th=[ 8557], 00:25:00.308 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:25:00.308 | 99.99th=[ 8658] 00:25:00.308 bw ( KiB/s): min=10240, max=51200, per=0.78%, avg=34816.00, stdev=16273.92, samples=8 00:25:00.308 iops : min= 10, max= 50, avg=34.00, stdev=15.89, samples=8 00:25:00.308 lat (msec) : 100=0.38%, 2000=8.71%, >=2000=90.91% 00:25:00.308 cpu : usr=0.00%, sys=0.64%, ctx=675, majf=0, minf=32769 00:25:00.308 IO depths : 1=0.4%, 2=0.8%, 4=1.5%, 8=3.0%, 16=6.1%, 32=12.1%, >=64=76.1% 00:25:00.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.308 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:25:00.308 issued rwts: total=264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.308 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.308 job1: (groupid=0, jobs=1): err= 0: pid=3693041: Mon Jun 10 11:33:27 2024 
00:25:00.308 read: IOPS=3, BW=3829KiB/s (3921kB/s)(40.0MiB/10698msec) 00:25:00.308 slat (usec): min=751, max=2153.8k, avg=266396.19, stdev=698721.34 00:25:00.308 clat (msec): min=41, max=10696, avg=9623.33, stdev=2532.45 00:25:00.308 lat (msec): min=2131, max=10697, avg=9889.73, stdev=2003.97 00:25:00.308 clat percentiles (msec): 00:25:00.308 | 1.00th=[ 42], 5.00th=[ 2140], 10.00th=[ 4329], 20.00th=[10537], 00:25:00.308 | 30.00th=[10537], 40.00th=[10537], 50.00th=[10671], 60.00th=[10671], 00:25:00.308 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:25:00.308 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:25:00.308 | 99.99th=[10671] 00:25:00.308 lat (msec) : 50=2.50%, >=2000=97.50% 00:25:00.308 cpu : usr=0.01%, sys=0.66%, ctx=73, majf=0, minf=10241 00:25:00.308 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0% 00:25:00.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.308 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:25:00.308 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.308 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.308 job1: (groupid=0, jobs=1): err= 0: pid=3693042: Mon Jun 10 11:33:27 2024 00:25:00.308 read: IOPS=37, BW=37.3MiB/s (39.1MB/s)(401MiB/10753msec) 00:25:00.308 slat (usec): min=42, max=2126.9k, avg=26676.03, stdev=151014.67 00:25:00.308 clat (msec): min=53, max=5506, avg=3244.48, stdev=1114.35 00:25:00.308 lat (msec): min=1532, max=5509, avg=3271.16, stdev=1112.11 00:25:00.308 clat percentiles (msec): 00:25:00.308 | 1.00th=[ 1536], 5.00th=[ 1569], 10.00th=[ 1569], 20.00th=[ 2333], 00:25:00.308 | 30.00th=[ 2567], 40.00th=[ 2802], 50.00th=[ 3037], 60.00th=[ 3842], 00:25:00.308 | 70.00th=[ 4144], 80.00th=[ 4329], 90.00th=[ 4597], 95.00th=[ 5000], 00:25:00.308 | 99.00th=[ 5403], 99.50th=[ 5470], 99.90th=[ 5537], 99.95th=[ 5537], 00:25:00.308 | 99.99th=[ 5537] 00:25:00.308 bw ( KiB/s): min= 4096, max=129024, per=1.05%, avg=46592.00, stdev=37122.97, samples=12 00:25:00.308 iops : min= 4, max= 126, avg=45.50, stdev=36.25, samples=12 00:25:00.308 lat (msec) : 100=0.25%, 2000=16.21%, >=2000=83.54% 00:25:00.308 cpu : usr=0.01%, sys=1.44%, ctx=986, majf=0, minf=32769 00:25:00.308 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.0%, >=64=84.3% 00:25:00.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.308 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:00.308 issued rwts: total=401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.308 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.308 job1: (groupid=0, jobs=1): err= 0: pid=3693043: Mon Jun 10 11:33:27 2024 00:25:00.308 read: IOPS=83, BW=83.3MiB/s (87.3MB/s)(889MiB/10677msec) 00:25:00.308 slat (usec): min=32, max=2098.6k, avg=11926.27, stdev=98317.96 00:25:00.308 clat (msec): min=68, max=4893, avg=1419.50, stdev=1349.90 00:25:00.308 lat (msec): min=516, max=4902, avg=1431.43, stdev=1352.64 00:25:00.308 clat percentiles (msec): 00:25:00.308 | 1.00th=[ 518], 5.00th=[ 527], 10.00th=[ 550], 20.00th=[ 600], 00:25:00.308 | 30.00th=[ 625], 40.00th=[ 642], 50.00th=[ 818], 60.00th=[ 852], 00:25:00.308 | 70.00th=[ 1150], 80.00th=[ 2072], 90.00th=[ 4463], 95.00th=[ 4665], 00:25:00.308 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], 00:25:00.308 | 99.99th=[ 4866] 00:25:00.308 bw ( KiB/s): min= 2048, max=253952, per=2.92%, avg=129844.25, stdev=84824.00, 
samples=12 00:25:00.308 iops : min= 2, max= 248, avg=126.75, stdev=82.79, samples=12 00:25:00.308 lat (msec) : 100=0.11%, 750=47.36%, 1000=19.80%, 2000=11.36%, >=2000=21.37% 00:25:00.308 cpu : usr=0.04%, sys=1.87%, ctx=1092, majf=0, minf=32769 00:25:00.308 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9% 00:25:00.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.308 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:00.308 issued rwts: total=889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.308 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.308 job1: (groupid=0, jobs=1): err= 0: pid=3693044: Mon Jun 10 11:33:27 2024 00:25:00.308 read: IOPS=13, BW=13.0MiB/s (13.7MB/s)(137MiB/10523msec) 00:25:00.308 slat (usec): min=294, max=2111.9k, avg=76471.27, stdev=352080.14 00:25:00.308 clat (msec): min=45, max=10370, avg=8599.60, stdev=1935.90 00:25:00.308 lat (msec): min=2115, max=10389, avg=8676.07, stdev=1795.86 00:25:00.308 clat percentiles (msec): 00:25:00.308 | 1.00th=[ 2123], 5.00th=[ 4279], 10.00th=[ 6208], 20.00th=[ 8288], 00:25:00.308 | 30.00th=[ 8792], 40.00th=[ 8926], 50.00th=[ 9194], 60.00th=[ 9329], 00:25:00.308 | 70.00th=[ 9597], 80.00th=[ 9866], 90.00th=[10134], 95.00th=[10268], 00:25:00.308 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:25:00.308 | 99.99th=[10402] 00:25:00.308 bw ( KiB/s): min= 8192, max=10240, per=0.21%, avg=9216.00, stdev=1448.15, samples=2 00:25:00.308 iops : min= 8, max= 10, avg= 9.00, stdev= 1.41, samples=2 00:25:00.308 lat (msec) : 50=0.73%, >=2000=99.27% 00:25:00.308 cpu : usr=0.03%, sys=0.71%, ctx=351, majf=0, minf=32769 00:25:00.308 IO depths : 1=0.7%, 2=1.5%, 4=2.9%, 8=5.8%, 16=11.7%, 32=23.4%, >=64=54.0% 00:25:00.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.308 complete : 0=0.0%, 4=90.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=9.1% 00:25:00.308 issued rwts: total=137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.308 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.308 job1: (groupid=0, jobs=1): err= 0: pid=3693045: Mon Jun 10 11:33:27 2024 00:25:00.308 read: IOPS=45, BW=45.2MiB/s (47.4MB/s)(475MiB/10501msec) 00:25:00.308 slat (usec): min=32, max=2107.6k, avg=21989.07, stdev=160352.61 00:25:00.308 clat (msec): min=53, max=8953, avg=2722.85, stdev=3262.21 00:25:00.308 lat (msec): min=531, max=8954, avg=2744.84, stdev=3271.06 00:25:00.308 clat percentiles (msec): 00:25:00.308 | 1.00th=[ 531], 5.00th=[ 535], 10.00th=[ 535], 20.00th=[ 542], 00:25:00.308 | 30.00th=[ 542], 40.00th=[ 542], 50.00th=[ 567], 60.00th=[ 609], 00:25:00.309 | 70.00th=[ 2769], 80.00th=[ 7148], 90.00th=[ 8792], 95.00th=[ 8926], 00:25:00.309 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:25:00.309 | 99.99th=[ 8926] 00:25:00.309 bw ( KiB/s): min= 4096, max=258048, per=1.77%, avg=78922.22, stdev=94217.48, samples=9 00:25:00.309 iops : min= 4, max= 252, avg=77.00, stdev=91.92, samples=9 00:25:00.309 lat (msec) : 100=0.21%, 750=61.47%, 2000=5.89%, >=2000=32.42% 00:25:00.309 cpu : usr=0.04%, sys=1.12%, ctx=626, majf=0, minf=32769 00:25:00.309 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.7%, >=64=86.7% 00:25:00.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.309 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:00.309 issued rwts: total=475,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.309 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:25:00.309 job1: (groupid=0, jobs=1): err= 0: pid=3693046: Mon Jun 10 11:33:27 2024 00:25:00.309 read: IOPS=33, BW=33.3MiB/s (34.9MB/s)(337MiB/10134msec) 00:25:00.309 slat (usec): min=34, max=2075.7k, avg=29721.94, stdev=146829.68 00:25:00.309 clat (msec): min=115, max=6607, avg=2440.36, stdev=1475.85 00:25:00.309 lat (msec): min=169, max=6644, avg=2470.08, stdev=1495.86 00:25:00.309 clat percentiles (msec): 00:25:00.309 | 1.00th=[ 176], 5.00th=[ 296], 10.00th=[ 667], 20.00th=[ 1552], 00:25:00.309 | 30.00th=[ 1989], 40.00th=[ 2165], 50.00th=[ 2333], 60.00th=[ 2433], 00:25:00.309 | 70.00th=[ 2467], 80.00th=[ 2668], 90.00th=[ 5805], 95.00th=[ 6007], 00:25:00.309 | 99.00th=[ 6074], 99.50th=[ 6074], 99.90th=[ 6611], 99.95th=[ 6611], 00:25:00.309 | 99.99th=[ 6611] 00:25:00.309 bw ( KiB/s): min=28672, max=71680, per=1.07%, avg=47559.11, stdev=15454.52, samples=9 00:25:00.309 iops : min= 28, max= 70, avg=46.44, stdev=15.09, samples=9 00:25:00.309 lat (msec) : 250=3.56%, 500=4.15%, 750=3.56%, 1000=2.97%, 2000=16.91% 00:25:00.309 lat (msec) : >=2000=68.84% 00:25:00.309 cpu : usr=0.03%, sys=1.18%, ctx=866, majf=0, minf=32769 00:25:00.309 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.7%, 32=9.5%, >=64=81.3% 00:25:00.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.309 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:25:00.309 issued rwts: total=337,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.309 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.309 job2: (groupid=0, jobs=1): err= 0: pid=3693054: Mon Jun 10 11:33:27 2024 00:25:00.309 read: IOPS=48, BW=48.8MiB/s (51.2MB/s)(495MiB/10134msec) 00:25:00.309 slat (usec): min=27, max=2090.7k, avg=20231.94, stdev=122470.36 00:25:00.309 clat (msec): min=116, max=5302, avg=1980.02, stdev=1322.13 00:25:00.309 lat (msec): min=134, max=5315, avg=2000.25, stdev=1328.04 00:25:00.309 clat percentiles (msec): 00:25:00.309 | 1.00th=[ 142], 5.00th=[ 268], 10.00th=[ 493], 20.00th=[ 1284], 00:25:00.309 | 30.00th=[ 1368], 40.00th=[ 1703], 50.00th=[ 1854], 60.00th=[ 1938], 00:25:00.309 | 70.00th=[ 1989], 80.00th=[ 2039], 90.00th=[ 5134], 95.00th=[ 5201], 00:25:00.309 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5336], 99.95th=[ 5336], 00:25:00.309 | 99.99th=[ 5336] 00:25:00.309 bw ( KiB/s): min=47104, max=120944, per=1.68%, avg=74988.90, stdev=22432.64, samples=10 00:25:00.309 iops : min= 46, max= 118, avg=73.20, stdev=21.85, samples=10 00:25:00.309 lat (msec) : 250=3.64%, 500=7.07%, 750=2.42%, 1000=3.03%, 2000=56.16% 00:25:00.309 lat (msec) : >=2000=27.68% 00:25:00.309 cpu : usr=0.02%, sys=1.59%, ctx=1056, majf=0, minf=32769 00:25:00.309 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.5%, >=64=87.3% 00:25:00.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.309 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:00.309 issued rwts: total=495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.309 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.309 job2: (groupid=0, jobs=1): err= 0: pid=3693055: Mon Jun 10 11:33:27 2024 00:25:00.309 read: IOPS=46, BW=46.3MiB/s (48.6MB/s)(465MiB/10037msec) 00:25:00.309 slat (usec): min=28, max=2097.6k, avg=21521.09, stdev=126109.28 00:25:00.309 clat (msec): min=26, max=6419, avg=1716.70, stdev=1344.99 00:25:00.309 lat (msec): min=36, max=6484, avg=1738.22, stdev=1361.19 00:25:00.309 clat percentiles (msec): 00:25:00.309 
| 1.00th=[ 46], 5.00th=[ 133], 10.00th=[ 409], 20.00th=[ 927], 00:25:00.309 | 30.00th=[ 1028], 40.00th=[ 1183], 50.00th=[ 1586], 60.00th=[ 1770], 00:25:00.309 | 70.00th=[ 1989], 80.00th=[ 2072], 90.00th=[ 2467], 95.00th=[ 6007], 00:25:00.309 | 99.00th=[ 6342], 99.50th=[ 6409], 99.90th=[ 6409], 99.95th=[ 6409], 00:25:00.309 | 99.99th=[ 6409] 00:25:00.309 bw ( KiB/s): min= 8192, max=147456, per=1.62%, avg=71936.00, stdev=46784.11, samples=8 00:25:00.309 iops : min= 8, max= 144, avg=70.25, stdev=45.69, samples=8 00:25:00.309 lat (msec) : 50=1.51%, 100=2.58%, 250=3.23%, 500=4.09%, 750=3.87% 00:25:00.309 lat (msec) : 1000=12.26%, 2000=44.52%, >=2000=27.96% 00:25:00.309 cpu : usr=0.05%, sys=1.15%, ctx=911, majf=0, minf=32769 00:25:00.309 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.9%, >=64=86.5% 00:25:00.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.309 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:00.309 issued rwts: total=465,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.309 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.309 job2: (groupid=0, jobs=1): err= 0: pid=3693056: Mon Jun 10 11:33:27 2024 00:25:00.309 read: IOPS=41, BW=41.8MiB/s (43.9MB/s)(419MiB/10014msec) 00:25:00.309 slat (usec): min=29, max=2020.8k, avg=23864.32, stdev=131811.46 00:25:00.309 clat (msec): min=12, max=5973, avg=1702.30, stdev=980.10 00:25:00.309 lat (msec): min=14, max=5983, avg=1726.16, stdev=1001.10 00:25:00.309 clat percentiles (msec): 00:25:00.309 | 1.00th=[ 19], 5.00th=[ 48], 10.00th=[ 192], 20.00th=[ 1217], 00:25:00.309 | 30.00th=[ 1586], 40.00th=[ 1703], 50.00th=[ 1787], 60.00th=[ 1905], 00:25:00.309 | 70.00th=[ 2005], 80.00th=[ 2072], 90.00th=[ 2123], 95.00th=[ 2165], 00:25:00.309 | 99.00th=[ 5940], 99.50th=[ 5940], 99.90th=[ 6007], 99.95th=[ 6007], 00:25:00.309 | 99.99th=[ 6007] 00:25:00.309 bw ( KiB/s): min=12288, max=126976, per=1.36%, avg=60672.00, stdev=35386.78, samples=8 00:25:00.309 iops : min= 12, max= 124, avg=59.25, stdev=34.56, samples=8 00:25:00.309 lat (msec) : 20=1.67%, 50=3.34%, 100=3.10%, 250=2.86%, 500=2.15% 00:25:00.309 lat (msec) : 750=1.67%, 1000=2.86%, 2000=52.74%, >=2000=29.59% 00:25:00.309 cpu : usr=0.01%, sys=0.84%, ctx=1302, majf=0, minf=32769 00:25:00.309 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.0% 00:25:00.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.309 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:00.309 issued rwts: total=419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.309 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.309 job2: (groupid=0, jobs=1): err= 0: pid=3693057: Mon Jun 10 11:33:27 2024 00:25:00.309 read: IOPS=11, BW=12.0MiB/s (12.6MB/s)(128MiB/10690msec) 00:25:00.309 slat (usec): min=655, max=2081.5k, avg=78138.58, stdev=350898.45 00:25:00.309 clat (msec): min=687, max=10688, avg=8005.49, stdev=3639.67 00:25:00.309 lat (msec): min=785, max=10688, avg=8083.63, stdev=3588.32 00:25:00.309 clat percentiles (msec): 00:25:00.309 | 1.00th=[ 785], 5.00th=[ 1368], 10.00th=[ 1586], 20.00th=[ 4212], 00:25:00.309 | 30.00th=[ 6409], 40.00th=[10402], 50.00th=[10537], 60.00th=[10537], 00:25:00.309 | 70.00th=[10537], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:25:00.309 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:25:00.309 | 99.99th=[10671] 00:25:00.309 bw ( KiB/s): min= 2048, max= 2048, per=0.05%, avg=2048.00, stdev= 
0.00, samples=1
00:25:00.309 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=1
00:25:00.309 lat (msec) : 750=0.78%, 1000=1.56%, 2000=12.50%, >=2000=85.16%
00:25:00.309 cpu : usr=0.00%, sys=1.36%, ctx=317, majf=0, minf=32769
00:25:00.309 IO depths : 1=0.8%, 2=1.6%, 4=3.1%, 8=6.2%, 16=12.5%, 32=25.0%, >=64=50.8%
00:25:00.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.309 complete : 0=0.0%, 4=50.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=50.0%
00:25:00.309 issued rwts: total=128,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.309 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.309 job2: (groupid=0, jobs=1): err= 0: pid=3693058: Mon Jun 10 11:33:27 2024
00:25:00.309 read: IOPS=38, BW=38.7MiB/s (40.6MB/s)(393MiB/10152msec)
00:25:00.309 slat (usec): min=132, max=2081.1k, avg=25535.79, stdev=136839.42
00:25:00.309 clat (msec): min=112, max=6760, avg=2231.87, stdev=1729.82
00:25:00.309 lat (msec): min=163, max=6777, avg=2257.40, stdev=1742.16
00:25:00.309 clat percentiles (msec):
00:25:00.309 | 1.00th=[ 176], 5.00th=[ 409], 10.00th=[ 684], 20.00th=[ 1318],
00:25:00.309 | 30.00th=[ 1603], 40.00th=[ 1653], 50.00th=[ 1703], 60.00th=[ 1770],
00:25:00.309 | 70.00th=[ 1955], 80.00th=[ 2567], 90.00th=[ 6409], 95.00th=[ 6611],
00:25:00.309 | 99.00th=[ 6745], 99.50th=[ 6745], 99.90th=[ 6745], 99.95th=[ 6745],
00:25:00.309 | 99.99th=[ 6745]
00:25:00.309 bw ( KiB/s): min=26624, max=83968, per=1.52%, avg=67716.25, stdev=18501.22, samples=8
00:25:00.309 iops : min= 26, max= 82, avg=66.12, stdev=18.07, samples=8
00:25:00.309 lat (msec) : 250=2.04%, 500=4.07%, 750=4.83%, 1000=4.07%, 2000=56.74%
00:25:00.309 lat (msec) : >=2000=28.24%
00:25:00.309 cpu : usr=0.02%, sys=1.77%, ctx=864, majf=0, minf=32769
00:25:00.309 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.1%, 32=8.1%, >=64=84.0%
00:25:00.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.309 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:25:00.309 issued rwts: total=393,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.309 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.309 job2: (groupid=0, jobs=1): err= 0: pid=3693059: Mon Jun 10 11:33:27 2024
00:25:00.309 read: IOPS=52, BW=52.7MiB/s (55.3MB/s)(533MiB/10114msec)
00:25:00.309 slat (usec): min=42, max=191899, avg=18802.65, stdev=22815.78
00:25:00.309 clat (msec): min=88, max=3551, avg=2195.33, stdev=768.35
00:25:00.309 lat (msec): min=114, max=3557, avg=2214.13, stdev=766.35
00:25:00.309 clat percentiles (msec):
00:25:00.309 | 1.00th=[ 292], 5.00th=[ 659], 10.00th=[ 1167], 20.00th=[ 1569],
00:25:00.309 | 30.00th=[ 1921], 40.00th=[ 2089], 50.00th=[ 2299], 60.00th=[ 2433],
00:25:00.309 | 70.00th=[ 2567], 80.00th=[ 2802], 90.00th=[ 3272], 95.00th=[ 3507],
00:25:00.309 | 99.00th=[ 3540], 99.50th=[ 3540], 99.90th=[ 3540], 99.95th=[ 3540],
00:25:00.309 | 99.99th=[ 3540]
00:25:00.309 bw ( KiB/s): min=16384, max=143360, per=1.24%, avg=55296.00, stdev=35286.10, samples=15
00:25:00.309 iops : min= 16, max= 140, avg=54.00, stdev=34.46, samples=15
00:25:00.309 lat (msec) : 100=0.19%, 250=0.56%, 500=2.44%, 750=3.19%, 1000=1.13%
00:25:00.309 lat (msec) : 2000=27.77%, >=2000=64.73%
00:25:00.309 cpu : usr=0.04%, sys=1.69%, ctx=1793, majf=0, minf=32769
00:25:00.310 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.2%
00:25:00.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.310 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:25:00.310 issued rwts: total=533,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.310 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.310 job2: (groupid=0, jobs=1): err= 0: pid=3693060: Mon Jun 10 11:33:27 2024
00:25:00.310 read: IOPS=74, BW=74.6MiB/s (78.3MB/s)(753MiB/10090msec)
00:25:00.310 slat (usec): min=34, max=2108.7k, avg=13273.25, stdev=108533.74
00:25:00.310 clat (msec): min=89, max=5852, avg=1632.68, stdev=1715.25
00:25:00.310 lat (msec): min=90, max=5871, avg=1645.95, stdev=1721.82
00:25:00.310 clat percentiles (msec):
00:25:00.310 | 1.00th=[ 169], 5.00th=[ 351], 10.00th=[ 542], 20.00th=[ 735],
00:25:00.310 | 30.00th=[ 793], 40.00th=[ 844], 50.00th=[ 844], 60.00th=[ 852],
00:25:00.310 | 70.00th=[ 1183], 80.00th=[ 1703], 90.00th=[ 5269], 95.00th=[ 5604],
00:25:00.310 | 99.00th=[ 5805], 99.50th=[ 5873], 99.90th=[ 5873], 99.95th=[ 5873],
00:25:00.310 | 99.99th=[ 5873]
00:25:00.310 bw ( KiB/s): min=10240, max=169984, per=2.39%, avg=106509.33, stdev=55842.81, samples=12
00:25:00.310 iops : min= 10, max= 166, avg=104.00, stdev=54.52, samples=12
00:25:00.310 lat (msec) : 100=0.53%, 250=2.12%, 500=6.11%, 750=17.93%, 1000=39.58%
00:25:00.310 lat (msec) : 2000=16.20%, >=2000=17.53%
00:25:00.310 cpu : usr=0.06%, sys=2.03%, ctx=894, majf=0, minf=32769
00:25:00.310 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.2%, >=64=91.6%
00:25:00.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.310 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:25:00.310 issued rwts: total=753,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.310 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.310 job2: (groupid=0, jobs=1): err= 0: pid=3693062: Mon Jun 10 11:33:27 2024
00:25:00.310 read: IOPS=99, BW=99.2MiB/s (104MB/s)(994MiB/10025msec)
00:25:00.310 slat (usec): min=24, max=2108.5k, avg=10055.93, stdev=94876.56
00:25:00.310 clat (msec): min=24, max=5809, avg=1195.27, stdev=1620.31
00:25:00.310 lat (msec): min=25, max=5824, avg=1205.33, stdev=1627.26
00:25:00.310 clat percentiles (msec):
00:25:00.310 | 1.00th=[ 35], 5.00th=[ 155], 10.00th=[ 279], 20.00th=[ 317],
00:25:00.310 | 30.00th=[ 317], 40.00th=[ 321], 50.00th=[ 338], 60.00th=[ 351],
00:25:00.310 | 70.00th=[ 1116], 80.00th=[ 1653], 90.00th=[ 4866], 95.00th=[ 5403],
00:25:00.310 | 99.00th=[ 5805], 99.50th=[ 5805], 99.90th=[ 5805], 99.95th=[ 5805],
00:25:00.310 | 99.99th=[ 5805]
00:25:00.310 bw ( KiB/s): min=10240, max=405504, per=2.81%, avg=125228.64, stdev=138636.45, samples=11
00:25:00.310 iops : min= 10, max= 396, avg=122.27, stdev=135.34, samples=11
00:25:00.310 lat (msec) : 50=1.51%, 100=1.61%, 250=6.14%, 500=52.31%, 1000=5.13%
00:25:00.310 lat (msec) : 2000=20.02%, >=2000=13.28%
00:25:00.310 cpu : usr=0.09%, sys=1.40%, ctx=1214, majf=0, minf=32769
00:25:00.310 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7%
00:25:00.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.310 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:00.310 issued rwts: total=994,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.310 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.310 job2: (groupid=0, jobs=1): err= 0: pid=3693063: Mon Jun 10 11:33:27 2024
00:25:00.310 read: IOPS=152, BW=152MiB/s (159MB/s)(1612MiB/10599msec)
00:25:00.310 slat (usec): min=24, max=2069.7k, avg=6526.72, stdev=89150.81
00:25:00.310 clat (msec): min=73, max=6594, avg=792.90, stdev=1606.82
00:25:00.310 lat (msec): min=102, max=6595, avg=799.43, stdev=1612.76
00:25:00.310 clat percentiles (msec):
00:25:00.310 | 1.00th=[ 104], 5.00th=[ 138], 10.00th=[ 203], 20.00th=[ 205],
00:25:00.310 | 30.00th=[ 207], 40.00th=[ 209], 50.00th=[ 213], 60.00th=[ 218],
00:25:00.310 | 70.00th=[ 313], 80.00th=[ 414], 90.00th=[ 1368], 95.00th=[ 6544],
00:25:00.310 | 99.00th=[ 6611], 99.50th=[ 6611], 99.90th=[ 6611], 99.95th=[ 6611],
00:25:00.310 | 99.99th=[ 6611]
00:25:00.310 bw ( KiB/s): min=26624, max=696320, per=7.58%, avg=337692.44, stdev=268825.73, samples=9
00:25:00.310 iops : min= 26, max= 680, avg=329.78, stdev=262.53, samples=9
00:25:00.310 lat (msec) : 100=0.06%, 250=63.21%, 500=20.29%, 750=0.50%, 1000=1.80%
00:25:00.310 lat (msec) : 2000=4.59%, >=2000=9.55%
00:25:00.310 cpu : usr=0.05%, sys=1.59%, ctx=1645, majf=0, minf=32769
00:25:00.310 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1%
00:25:00.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.310 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:00.310 issued rwts: total=1612,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.310 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.310 job2: (groupid=0, jobs=1): err= 0: pid=3693064: Mon Jun 10 11:33:27 2024
00:25:00.310 read: IOPS=28, BW=28.1MiB/s (29.4MB/s)(295MiB/10508msec)
00:25:00.310 slat (usec): min=28, max=2070.8k, avg=34019.79, stdev=140700.03
00:25:00.310 clat (msec): min=470, max=6465, avg=3877.90, stdev=1850.53
00:25:00.310 lat (msec): min=539, max=6483, avg=3911.92, stdev=1853.30
00:25:00.310 clat percentiles (msec):
00:25:00.310 | 1.00th=[ 558], 5.00th=[ 584], 10.00th=[ 1150], 20.00th=[ 1754],
00:25:00.310 | 30.00th=[ 2140], 40.00th=[ 3809], 50.00th=[ 4010], 60.00th=[ 5067],
00:25:00.310 | 70.00th=[ 5269], 80.00th=[ 5604], 90.00th=[ 6141], 95.00th=[ 6275],
00:25:00.310 | 99.00th=[ 6409], 99.50th=[ 6409], 99.90th=[ 6477], 99.95th=[ 6477],
00:25:00.310 | 99.99th=[ 6477]
00:25:00.310 bw ( KiB/s): min=16384, max=65536, per=0.96%, avg=42705.25, stdev=17653.71, samples=8
00:25:00.310 iops : min= 16, max= 64, avg=41.62, stdev=17.22, samples=8
00:25:00.310 lat (msec) : 500=0.34%, 750=5.76%, 1000=2.03%, 2000=19.32%, >=2000=72.54%
00:25:00.310 cpu : usr=0.00%, sys=0.84%, ctx=1086, majf=0, minf=32769
00:25:00.310 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.4%, 32=10.8%, >=64=78.6%
00:25:00.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.310 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
00:25:00.310 issued rwts: total=295,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.310 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.310 job2: (groupid=0, jobs=1): err= 0: pid=3693065: Mon Jun 10 11:33:27 2024
00:25:00.310 read: IOPS=34, BW=34.8MiB/s (36.5MB/s)(363MiB/10437msec)
00:25:00.310 slat (usec): min=612, max=1358.8k, avg=28545.12, stdev=100914.39
00:25:00.310 clat (msec): min=73, max=4581, avg=2936.00, stdev=899.35
00:25:00.310 lat (msec): min=1431, max=4802, avg=2964.55, stdev=894.12
00:25:00.310 clat percentiles (msec):
00:25:00.310 | 1.00th=[ 1469], 5.00th=[ 1921], 10.00th=[ 2039], 20.00th=[ 2333],
00:25:00.310 | 30.00th=[ 2366], 40.00th=[ 2400], 50.00th=[ 2467], 60.00th=[ 2802],
00:25:00.310 | 70.00th=[ 3540], 80.00th=[ 3876], 90.00th=[ 4463], 95.00th=[ 4530],
00:25:00.310 | 99.00th=[ 4597], 99.50th=[ 4597], 99.90th=[ 4597], 99.95th=[ 4597],
00:25:00.310 | 99.99th=[ 4597]
00:25:00.310 bw ( KiB/s): min=28672, max=71680, per=1.20%, avg=53475.56, stdev=14748.60, samples=9
00:25:00.310 iops : min= 28, max= 70, avg=52.22, stdev=14.40, samples=9
00:25:00.310 lat (msec) : 100=0.28%, 2000=7.16%, >=2000=92.56%
00:25:00.310 cpu : usr=0.00%, sys=0.86%, ctx=1371, majf=0, minf=32769
00:25:00.310 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.8%, >=64=82.6%
00:25:00.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.310 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:25:00.310 issued rwts: total=363,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.310 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.310 job2: (groupid=0, jobs=1): err= 0: pid=3693066: Mon Jun 10 11:33:27 2024
00:25:00.310 read: IOPS=44, BW=45.0MiB/s (47.2MB/s)(451MiB/10025msec)
00:25:00.310 slat (usec): min=30, max=741846, avg=22172.86, stdev=41432.10
00:25:00.310 clat (msec): min=22, max=4519, avg=2428.20, stdev=739.85
00:25:00.310 lat (msec): min=28, max=4549, avg=2450.37, stdev=736.70
00:25:00.310 clat percentiles (msec):
00:25:00.310 | 1.00th=[ 39], 5.00th=[ 735], 10.00th=[ 1620], 20.00th=[ 2165],
00:25:00.310 | 30.00th=[ 2366], 40.00th=[ 2467], 50.00th=[ 2601], 60.00th=[ 2702],
00:25:00.310 | 70.00th=[ 2735], 80.00th=[ 2802], 90.00th=[ 3037], 95.00th=[ 3406],
00:25:00.310 | 99.00th=[ 3675], 99.50th=[ 4530], 99.90th=[ 4530], 99.95th=[ 4530],
00:25:00.310 | 99.99th=[ 4530]
00:25:00.310 bw ( KiB/s): min=14336, max=69632, per=1.06%, avg=47359.93, stdev=16089.28, samples=14
00:25:00.310 iops : min= 14, max= 68, avg=46.14, stdev=15.67, samples=14
00:25:00.310 lat (msec) : 50=1.77%, 100=1.77%, 250=0.44%, 500=0.22%, 750=1.33%
00:25:00.310 lat (msec) : 1000=2.66%, 2000=5.54%, >=2000=86.25%
00:25:00.310 cpu : usr=0.01%, sys=1.08%, ctx=1790, majf=0, minf=32769
00:25:00.311 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.5%, 32=7.1%, >=64=86.0%
00:25:00.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.311 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:25:00.311 issued rwts: total=451,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.311 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.311 job2: (groupid=0, jobs=1): err= 0: pid=3693067: Mon Jun 10 11:33:27 2024
00:25:00.311 read: IOPS=38, BW=38.2MiB/s (40.0MB/s)(385MiB/10082msec)
00:25:00.311 slat (usec): min=678, max=2091.2k, avg=25989.33, stdev=137371.31
00:25:00.311 clat (msec): min=72, max=6694, avg=1689.92, stdev=1054.59
00:25:00.311 lat (msec): min=120, max=6703, avg=1715.91, stdev=1084.94
00:25:00.311 clat percentiles (msec):
00:25:00.311 | 1.00th=[ 125], 5.00th=[ 275], 10.00th=[ 527], 20.00th=[ 1099],
00:25:00.311 | 30.00th=[ 1502], 40.00th=[ 1536], 50.00th=[ 1586], 60.00th=[ 1670],
00:25:00.311 | 70.00th=[ 1787], 80.00th=[ 1938], 90.00th=[ 2567], 95.00th=[ 2769],
00:25:00.311 | 99.00th=[ 6678], 99.50th=[ 6678], 99.90th=[ 6678], 99.95th=[ 6678],
00:25:00.311 | 99.99th=[ 6678]
00:25:00.311 bw ( KiB/s): min=45056, max=100352, per=1.68%, avg=74898.71, stdev=16505.48, samples=7
00:25:00.311 iops : min= 44, max= 98, avg=73.14, stdev=16.12, samples=7
00:25:00.311 lat (msec) : 100=0.26%, 250=4.42%, 500=4.42%, 750=4.42%, 1000=4.68%
00:25:00.311 lat (msec) : 2000=62.86%, >=2000=18.96%
00:25:00.311 cpu : usr=0.03%, sys=1.31%, ctx=855, majf=0, minf=32769
00:25:00.311 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.2%, 32=8.3%, >=64=83.6%
00:25:00.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.311 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:25:00.311 issued rwts: total=385,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.311 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.311 job3: (groupid=0, jobs=1): err= 0: pid=3693071: Mon Jun 10 11:33:27 2024
00:25:00.311 read: IOPS=119, BW=120MiB/s (125MB/s)(1269MiB/10604msec)
00:25:00.311 slat (usec): min=31, max=2061.3k, avg=8312.57, stdev=58857.71
00:25:00.311 clat (msec): min=47, max=3208, avg=1008.99, stdev=643.20
00:25:00.311 lat (msec): min=397, max=3232, avg=1017.30, stdev=644.91
00:25:00.311 clat percentiles (msec):
00:25:00.311 | 1.00th=[ 397], 5.00th=[ 401], 10.00th=[ 405], 20.00th=[ 418],
00:25:00.311 | 30.00th=[ 609], 40.00th=[ 894], 50.00th=[ 969], 60.00th=[ 1036],
00:25:00.311 | 70.00th=[ 1099], 80.00th=[ 1167], 90.00th=[ 2106], 95.00th=[ 2735],
00:25:00.311 | 99.00th=[ 3104], 99.50th=[ 3171], 99.90th=[ 3205], 99.95th=[ 3205],
00:25:00.311 | 99.99th=[ 3205]
00:25:00.311 bw ( KiB/s): min=67584, max=319488, per=3.50%, avg=155784.53, stdev=74401.36, samples=15
00:25:00.311 iops : min= 66, max= 312, avg=152.13, stdev=72.66, samples=15
00:25:00.311 lat (msec) : 50=0.08%, 500=26.08%, 750=9.46%, 1000=19.78%, 2000=34.59%
00:25:00.311 lat (msec) : >=2000=10.01%
00:25:00.311 cpu : usr=0.08%, sys=2.02%, ctx=2080, majf=0, minf=32769
00:25:00.311 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.0%
00:25:00.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.311 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:00.311 issued rwts: total=1269,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.311 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.311 job3: (groupid=0, jobs=1): err= 0: pid=3693072: Mon Jun 10 11:33:27 2024
00:25:00.311 read: IOPS=49, BW=49.5MiB/s (51.9MB/s)(532MiB/10749msec)
00:25:00.311 slat (usec): min=52, max=2063.1k, avg=20113.16, stdev=90039.06
00:25:00.311 clat (msec): min=45, max=3769, avg=2424.22, stdev=563.01
00:25:00.311 lat (msec): min=1493, max=3777, avg=2444.33, stdev=554.66
00:25:00.311 clat percentiles (msec):
00:25:00.311 | 1.00th=[ 1569], 5.00th=[ 1653], 10.00th=[ 1754], 20.00th=[ 1888],
00:25:00.311 | 30.00th=[ 1972], 40.00th=[ 2232], 50.00th=[ 2366], 60.00th=[ 2567],
00:25:00.311 | 70.00th=[ 2802], 80.00th=[ 2903], 90.00th=[ 3138], 95.00th=[ 3507],
00:25:00.311 | 99.00th=[ 3708], 99.50th=[ 3742], 99.90th=[ 3775], 99.95th=[ 3775],
00:25:00.311 | 99.99th=[ 3775]
00:25:00.311 bw ( KiB/s): min= 2048, max=104448, per=1.16%, avg=51701.94, stdev=26055.02, samples=16
00:25:00.311 iops : min= 2, max= 102, avg=50.38, stdev=25.50, samples=16
00:25:00.311 lat (msec) : 50=0.19%, 2000=31.20%, >=2000=68.61%
00:25:00.311 cpu : usr=0.04%, sys=1.66%, ctx=2282, majf=0, minf=32769
00:25:00.311 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.2%
00:25:00.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.311 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:25:00.311 issued rwts: total=532,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.311 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.311 job3: (groupid=0, jobs=1): err= 0: pid=3693074: Mon Jun 10 11:33:27 2024
00:25:00.311 read: IOPS=41, BW=41.4MiB/s (43.4MB/s)(415MiB/10019msec)
00:25:00.311 slat (usec): min=31, max=2106.8k, avg=24093.40, stdev=168524.84
00:25:00.311 clat (msec): min=17, max=5795, avg=1137.08, stdev=1044.68
00:25:00.311 lat (msec): min=18, max=7544, avg=1161.17, stdev=1101.15
00:25:00.311 clat percentiles (msec):
00:25:00.311 | 1.00th=[ 24], 5.00th=[ 54], 10.00th=[ 87], 20.00th=[ 262],
00:25:00.311 | 30.00th=[ 827], 40.00th=[ 995], 50.00th=[ 1116], 60.00th=[ 1200],
00:25:00.311 | 70.00th=[ 1284], 80.00th=[ 1435], 90.00th=[ 1620], 95.00th=[ 1754],
00:25:00.311 | 99.00th=[ 5604], 99.50th=[ 5805], 99.90th=[ 5805], 99.95th=[ 5805],
00:25:00.311 | 99.99th=[ 5805]
00:25:00.311 bw ( KiB/s): min=14336, max=147456, per=1.70%, avg=75776.00, stdev=47964.32, samples=5
00:25:00.311 iops : min= 14, max= 144, avg=74.00, stdev=46.84, samples=5
00:25:00.311 lat (msec) : 20=0.72%, 50=3.61%, 100=6.99%, 250=7.71%, 500=5.78%
00:25:00.311 lat (msec) : 750=2.65%, 1000=12.53%, 2000=55.18%, >=2000=4.82%
00:25:00.311 cpu : usr=0.01%, sys=0.81%, ctx=1125, majf=0, minf=32769
00:25:00.311 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.9%, 32=7.7%, >=64=84.8%
00:25:00.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.311 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:25:00.311 issued rwts: total=415,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.311 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.311 job3: (groupid=0, jobs=1): err= 0: pid=3693075: Mon Jun 10 11:33:27 2024
00:25:00.311 read: IOPS=42, BW=42.0MiB/s (44.1MB/s)(440MiB/10464msec)
00:25:00.311 slat (usec): min=40, max=2136.3k, avg=23628.51, stdev=172164.87
00:25:00.311 clat (msec): min=63, max=7237, avg=2771.54, stdev=2539.67
00:25:00.311 lat (msec): min=821, max=7238, avg=2795.17, stdev=2540.99
00:25:00.311 clat percentiles (msec):
00:25:00.311 | 1.00th=[ 818], 5.00th=[ 827], 10.00th=[ 827], 20.00th=[ 835],
00:25:00.311 | 30.00th=[ 894], 40.00th=[ 1116], 50.00th=[ 1418], 60.00th=[ 1687],
00:25:00.311 | 70.00th=[ 2165], 80.00th=[ 6611], 90.00th=[ 6946], 95.00th=[ 7080],
00:25:00.311 | 99.00th=[ 7215], 99.50th=[ 7215], 99.90th=[ 7215], 99.95th=[ 7215],
00:25:00.311 | 99.99th=[ 7215]
00:25:00.311 bw ( KiB/s): min= 6144, max=157696, per=1.79%, avg=79948.75, stdev=69228.73, samples=8
00:25:00.311 iops : min= 6, max= 154, avg=78.00, stdev=67.55, samples=8
00:25:00.311 lat (msec) : 100=0.23%, 1000=35.91%, 2000=32.73%, >=2000=31.14%
00:25:00.311 cpu : usr=0.04%, sys=1.23%, ctx=597, majf=0, minf=32769
00:25:00.311 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.3%, >=64=85.7%
00:25:00.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.311 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:25:00.311 issued rwts: total=440,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.311 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.311 job3: (groupid=0, jobs=1): err= 0: pid=3693076: Mon Jun 10 11:33:27 2024
00:25:00.311 read: IOPS=5, BW=5971KiB/s (6115kB/s)(62.0MiB/10632msec)
00:25:00.311 slat (usec): min=651, max=2109.0k, avg=170200.57, stdev=540792.51
00:25:00.311 clat (msec): min=78, max=10594, avg=7019.95, stdev=3199.45
00:25:00.311 lat (msec): min=2124, max=10631, avg=7190.15, stdev=3103.37
00:25:00.311 clat percentiles (msec):
00:25:00.311 | 1.00th=[ 79], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 4279],
00:25:00.311 | 30.00th=[ 6275], 40.00th=[ 6342], 50.00th=[ 6477], 60.00th=[ 8658],
00:25:00.311 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10537], 95.00th=[10537],
00:25:00.311 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537],
00:25:00.311 | 99.99th=[10537]
00:25:00.311 lat (msec) : 100=1.61%, >=2000=98.39%
00:25:00.311 cpu : usr=0.01%, sys=0.46%, ctx=131, majf=0, minf=15873
00:25:00.311 IO depths : 1=1.6%, 2=3.2%, 4=6.5%, 8=12.9%, 16=25.8%, 32=50.0%, >=64=0.0%
00:25:00.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.311 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:25:00.311 issued rwts: total=62,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.311 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.311 job3: (groupid=0, jobs=1): err= 0: pid=3693077: Mon Jun 10 11:33:27 2024
00:25:00.311 read: IOPS=73, BW=73.6MiB/s (77.2MB/s)(792MiB/10763msec)
00:25:00.311 slat (usec): min=34, max=2160.7k, avg=13524.31, stdev=77217.59
00:25:00.311 clat (msec): min=45, max=3366, avg=1675.35, stdev=589.07
00:25:00.311 lat (msec): min=959, max=3389, avg=1688.87, stdev=587.21
00:25:00.311 clat percentiles (msec):
00:25:00.311 | 1.00th=[ 1011], 5.00th=[ 1083], 10.00th=[ 1116], 20.00th=[ 1167],
00:25:00.311 | 30.00th=[ 1234], 40.00th=[ 1284], 50.00th=[ 1519], 60.00th=[ 1653],
00:25:00.311 | 70.00th=[ 1955], 80.00th=[ 2022], 90.00th=[ 2567], 95.00th=[ 3004],
00:25:00.311 | 99.00th=[ 3339], 99.50th=[ 3339], 99.90th=[ 3373], 99.95th=[ 3373],
00:25:00.311 | 99.99th=[ 3373]
00:25:00.311 bw ( KiB/s): min= 8192, max=135168, per=1.80%, avg=79983.53, stdev=36259.04, samples=17
00:25:00.311 iops : min= 8, max= 132, avg=78.06, stdev=35.42, samples=17
00:25:00.311 lat (msec) : 50=0.13%, 1000=0.76%, 2000=75.13%, >=2000=23.99%
00:25:00.311 cpu : usr=0.04%, sys=2.19%, ctx=1961, majf=0, minf=32769
00:25:00.311 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.0%
00:25:00.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.311 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:25:00.311 issued rwts: total=792,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.311 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.311 job3: (groupid=0, jobs=1): err= 0: pid=3693078: Mon Jun 10 11:33:27 2024
00:25:00.311 read: IOPS=68, BW=68.7MiB/s (72.0MB/s)(727MiB/10587msec)
00:25:00.311 slat (usec): min=35, max=1870.3k, avg=14490.54, stdev=70022.33
00:25:00.311 clat (msec): min=47, max=3112, avg=1686.89, stdev=508.65
00:25:00.311 lat (msec): min=1093, max=3136, avg=1701.38, stdev=504.16
00:25:00.311 clat percentiles (msec):
00:25:00.311 | 1.00th=[ 1116], 5.00th=[ 1167], 10.00th=[ 1200], 20.00th=[ 1250],
00:25:00.312 | 30.00th=[ 1351], 40.00th=[ 1469], 50.00th=[ 1569], 60.00th=[ 1636],
00:25:00.312 | 70.00th=[ 1821], 80.00th=[ 1888], 90.00th=[ 2601], 95.00th=[ 2836],
00:25:00.312 | 99.00th=[ 3071], 99.50th=[ 3104], 99.90th=[ 3104], 99.95th=[ 3104],
00:25:00.312 | 99.99th=[ 3104]
00:25:00.312 bw ( KiB/s): min=12288, max=141029, per=1.84%, avg=81753.67, stdev=34474.04, samples=15
00:25:00.312 iops : min= 12, max= 137, avg=79.73, stdev=33.58, samples=15
00:25:00.312 lat (msec) : 50=0.14%, 2000=82.94%, >=2000=16.92%
00:25:00.312 cpu : usr=0.04%, sys=1.11%, ctx=1795, majf=0, minf=32769
00:25:00.312 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3%
00:25:00.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.312 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:25:00.312 issued rwts: total=727,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.312 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.312 job3: (groupid=0, jobs=1): err= 0: pid=3693079: Mon Jun 10 11:33:27 2024
00:25:00.312 read: IOPS=102, BW=102MiB/s (107MB/s)(1035MiB/10117msec)
00:25:00.312 slat (usec): min=30, max=88224, avg=9679.64, stdev=10666.87
00:25:00.312 clat (msec): min=90, max=1832, avg=1198.63, stdev=344.36
00:25:00.312 lat (msec): min=158, max=1839, avg=1208.31, stdev=345.55
00:25:00.312 clat percentiles (msec):
00:25:00.312 | 1.00th=[ 255], 5.00th=[ 542], 10.00th=[ 852], 20.00th=[ 877],
00:25:00.312 | 30.00th=[ 953], 40.00th=[ 1150], 50.00th=[ 1267], 60.00th=[ 1318],
00:25:00.312 | 70.00th=[ 1401], 80.00th=[ 1485], 90.00th=[ 1636], 95.00th=[ 1737],
00:25:00.312 | 99.00th=[ 1821], 99.50th=[ 1821], 99.90th=[ 1821], 99.95th=[ 1838],
00:25:00.312 | 99.99th=[ 1838]
00:25:00.312 bw ( KiB/s): min=51200, max=155648, per=2.32%, avg=103188.00, stdev=28930.15, samples=18
00:25:00.312 iops : min= 50, max= 152, avg=100.72, stdev=28.30, samples=18
00:25:00.312 lat (msec) : 100=0.10%, 250=0.77%, 500=3.86%, 750=1.84%, 1000=24.54%
00:25:00.312 lat (msec) : 2000=68.89%
00:25:00.312 cpu : usr=0.07%, sys=2.69%, ctx=1882, majf=0, minf=32769
00:25:00.312 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=93.9%
00:25:00.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.312 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:00.312 issued rwts: total=1035,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.312 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.312 job3: (groupid=0, jobs=1): err= 0: pid=3693080: Mon Jun 10 11:33:27 2024
00:25:00.312 read: IOPS=26, BW=26.2MiB/s (27.5MB/s)(282MiB/10746msec)
00:25:00.312 slat (usec): min=29, max=2074.6k, avg=35497.87, stdev=235690.01
00:25:00.312 clat (msec): min=733, max=9025, avg=4188.21, stdev=3514.25
00:25:00.312 lat (msec): min=768, max=9031, avg=4223.71, stdev=3521.23
00:25:00.312 clat percentiles (msec):
00:25:00.312 | 1.00th=[ 768], 5.00th=[ 885], 10.00th=[ 995], 20.00th=[ 1062],
00:25:00.312 | 30.00th=[ 1234], 40.00th=[ 1452], 50.00th=[ 1687], 60.00th=[ 5269],
00:25:00.312 | 70.00th=[ 8658], 80.00th=[ 8792], 90.00th=[ 8926], 95.00th=[ 8926],
00:25:00.312 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060],
00:25:00.312 | 99.99th=[ 9060]
00:25:00.312 bw ( KiB/s): min= 1914, max=129024, per=1.78%, avg=79326.50, stdev=55380.55, samples=4
00:25:00.312 iops : min= 1, max= 126, avg=77.25, stdev=54.49, samples=4
00:25:00.312 lat (msec) : 750=0.35%, 1000=9.93%, 2000=44.33%, >=2000=45.39%
00:25:00.312 cpu : usr=0.00%, sys=1.57%, ctx=343, majf=0, minf=32769
00:25:00.312 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.7%, 32=11.3%, >=64=77.7%
00:25:00.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.312 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
00:25:00.312 issued rwts: total=282,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.312 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.312 job3: (groupid=0, jobs=1): err= 0: pid=3693081: Mon Jun 10 11:33:27 2024
00:25:00.312 read: IOPS=114, BW=114MiB/s (120MB/s)(1209MiB/10575msec)
00:25:00.312 slat (usec): min=23, max=2060.7k, avg=8702.62, stdev=60452.08
00:25:00.312 clat (msec): min=47, max=3207, avg=1048.84, stdev=681.60
00:25:00.312 lat (msec): min=509, max=3210, avg=1057.55, stdev=682.82
00:25:00.312 clat percentiles (msec):
00:25:00.312 | 1.00th=[ 510], 5.00th=[ 510], 10.00th=[ 514], 20.00th=[ 575],
00:25:00.312 | 30.00th=[ 617], 40.00th=[ 651], 50.00th=[ 785], 60.00th=[ 986],
00:25:00.312 | 70.00th=[ 1099], 80.00th=[ 1284], 90.00th=[ 2232], 95.00th=[ 2970],
00:25:00.312 | 99.00th=[ 3138], 99.50th=[ 3171], 99.90th=[ 3171], 99.95th=[ 3205],
00:25:00.312 | 99.99th=[ 3205]
00:25:00.312 bw ( KiB/s): min=61440, max=253952, per=3.31%, avg=147592.53, stdev=65807.29, samples=15
00:25:00.312 iops : min= 60, max= 248, avg=144.13, stdev=64.26, samples=15
00:25:00.312 lat (msec) : 50=0.08%, 750=48.47%, 1000=11.83%, 2000=29.11%, >=2000=10.50%
00:25:00.312 cpu : usr=0.09%, sys=1.75%, ctx=2014, majf=0, minf=32769
00:25:00.312 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.8%
00:25:00.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.312 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:00.312 issued rwts: total=1209,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.312 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.312 job3: (groupid=0, jobs=1): err= 0: pid=3693082: Mon Jun 10 11:33:27 2024
00:25:00.312 read: IOPS=60, BW=60.1MiB/s (63.1MB/s)(640MiB/10643msec)
00:25:00.312 slat (usec): min=40, max=2130.4k, avg=16547.26, stdev=84739.40
00:25:00.312 clat (msec): min=49, max=3693, avg=1941.31, stdev=691.84
00:25:00.312 lat (msec): min=1107, max=3694, avg=1957.86, stdev=688.80
00:25:00.312 clat percentiles (msec):
00:25:00.312 | 1.00th=[ 1116], 5.00th=[ 1150], 10.00th=[ 1183], 20.00th=[ 1267],
00:25:00.312 | 30.00th=[ 1385], 40.00th=[ 1536], 50.00th=[ 1838], 60.00th=[ 2089],
00:25:00.312 | 70.00th=[ 2366], 80.00th=[ 2500], 90.00th=[ 2937], 95.00th=[ 3306],
00:25:00.312 | 99.00th=[ 3641], 99.50th=[ 3675], 99.90th=[ 3708], 99.95th=[ 3708],
00:25:00.312 | 99.99th=[ 3708]
00:25:00.312 bw ( KiB/s): min=18432, max=112640, per=1.68%, avg=74878.07, stdev=30614.71, samples=14
00:25:00.312 iops : min= 18, max= 110, avg=73.00, stdev=29.93, samples=14
00:25:00.312 lat (msec) : 50=0.16%, 2000=58.75%, >=2000=41.09%
00:25:00.312 cpu : usr=0.01%, sys=1.33%, ctx=2250, majf=0, minf=32769
00:25:00.312 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2%
00:25:00.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.312 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:25:00.312 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.312 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.312 job3: (groupid=0, jobs=1): err= 0: pid=3693083: Mon Jun 10 11:33:27 2024
00:25:00.312 read: IOPS=57, BW=57.6MiB/s (60.4MB/s)(577MiB/10019msec)
00:25:00.312 slat (usec): min=28, max=2112.4k, avg=17339.88, stdev=113719.71
00:25:00.312 clat (msec): min=9, max=4866, avg=1337.77, stdev=808.41
00:25:00.312 lat (msec): min=41, max=4932, avg=1355.11, stdev=821.83
00:25:00.312 clat percentiles (msec):
00:25:00.312 | 1.00th=[ 55], 5.00th=[ 220], 10.00th=[ 676], 20.00th=[ 936],
00:25:00.312 | 30.00th=[ 1011], 40.00th=[ 1045], 50.00th=[ 1062], 60.00th=[ 1083],
00:25:00.312 | 70.00th=[ 1318], 80.00th=[ 1921], 90.00th=[ 2567], 95.00th=[ 2601],
00:25:00.312 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866],
00:25:00.312 | 99.99th=[ 4866]
00:25:00.312 bw ( KiB/s): min= 6144, max=151552, per=1.84%, avg=82124.80, stdev=53356.85, samples=10
00:25:00.312 iops : min= 6, max= 148, avg=80.20, stdev=52.11, samples=10
00:25:00.312 lat (msec) : 10=0.17%, 50=0.52%, 100=1.39%, 250=3.47%, 500=2.95%
00:25:00.312 lat (msec) : 750=1.56%, 1000=16.81%, 2000=54.07%, >=2000=19.06%
00:25:00.312 cpu : usr=0.04%, sys=1.06%, ctx=835, majf=0, minf=32769
00:25:00.312 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.5%, >=64=89.1%
00:25:00.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.312 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:25:00.312 issued rwts: total=577,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.312 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.312 job3: (groupid=0, jobs=1): err= 0: pid=3693085: Mon Jun 10 11:33:27 2024
00:25:00.312 read: IOPS=7, BW=7894KiB/s (8084kB/s)(83.0MiB/10766msec)
00:25:00.312 slat (usec): min=646, max=2112.9k, avg=128915.59, stdev=492852.07
00:25:00.312 clat (msec): min=64, max=10764, avg=9415.99, stdev=2595.70
00:25:00.312 lat (msec): min=2114, max=10764, avg=9544.91, stdev=2382.56
00:25:00.312 clat percentiles (msec):
00:25:00.312 | 1.00th=[ 65], 5.00th=[ 2165], 10.00th=[ 6409], 20.00th=[ 8557],
00:25:00.312 | 30.00th=[10537], 40.00th=[10671], 50.00th=[10671], 60.00th=[10671],
00:25:00.312 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10805], 95.00th=[10805],
00:25:00.312 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:25:00.312 | 99.99th=[10805]
00:25:00.312 lat (msec) : 100=1.20%, >=2000=98.80%
00:25:00.312 cpu : usr=0.00%, sys=0.93%, ctx=98, majf=0, minf=21249
00:25:00.312 IO depths : 1=1.2%, 2=2.4%, 4=4.8%, 8=9.6%, 16=19.3%, 32=38.6%, >=64=24.1%
00:25:00.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.312 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:25:00.312 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.312 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.312 job4: (groupid=0, jobs=1): err= 0: pid=3693091: Mon Jun 10 11:33:27 2024
00:25:00.312 read: IOPS=38, BW=38.4MiB/s (40.2MB/s)(408MiB/10630msec)
00:25:00.312 slat (usec): min=25, max=2155.5k, avg=24522.16, stdev=184111.44
00:25:00.312 clat (msec): min=623, max=8442, avg=2824.72, stdev=2585.79
00:25:00.312 lat (msec): min=695, max=8448, avg=2849.24, stdev=2601.05
00:25:00.312 clat percentiles (msec):
00:25:00.312 | 1.00th=[ 701], 5.00th=[ 718], 10.00th=[ 726], 20.00th=[ 726],
00:25:00.312 | 30.00th=[ 802], 40.00th=[ 894], 50.00th=[ 1167], 60.00th=[ 1435],
00:25:00.312 | 70.00th=[ 5000], 80.00th=[ 6342], 90.00th=[ 6409], 95.00th=[ 6477],
00:25:00.312 | 99.00th=[ 8423], 99.50th=[ 8423], 99.90th=[ 8423], 99.95th=[ 8423],
00:25:00.312 | 99.99th=[ 8423]
00:25:00.312 bw ( KiB/s): min= 4096, max=186368, per=2.13%, avg=94916.83, stdev=80595.64, samples=6
00:25:00.312 iops : min= 4, max= 182, avg=92.67, stdev=78.69, samples=6
00:25:00.312 lat (msec) : 750=26.23%, 1000=17.89%, 2000=15.93%, >=2000=39.95%
00:25:00.312 cpu : usr=0.00%, sys=1.19%, ctx=582, majf=0, minf=32769
00:25:00.312 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=3.9%, 32=7.8%, >=64=84.6%
00:25:00.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.312 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:25:00.312 issued rwts: total=408,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.313 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.313 job4: (groupid=0, jobs=1): err= 0: pid=3693092: Mon Jun 10 11:33:27 2024
00:25:00.313 read: IOPS=228, BW=228MiB/s (239MB/s)(2283MiB/10010msec)
00:25:00.313 slat (usec): min=24, max=96809, avg=4375.61, stdev=7754.08
00:25:00.313 clat (msec): min=9, max=1559, avg=522.19, stdev=311.69
00:25:00.313 lat (msec): min=9, max=1562, avg=526.56, stdev=313.97
00:25:00.313 clat percentiles (msec):
00:25:00.313 | 1.00th=[ 28], 5.00th=[ 230], 10.00th=[ 321], 20.00th=[ 326],
00:25:00.313 | 30.00th=[ 330], 40.00th=[ 422], 50.00th=[ 430], 60.00th=[ 447],
00:25:00.313 | 70.00th=[ 542], 80.00th=[ 651], 90.00th=[ 995], 95.00th=[ 1368],
00:25:00.313 | 99.00th=[ 1502], 99.50th=[ 1519], 99.90th=[ 1552], 99.95th=[ 1552],
00:25:00.313 | 99.99th=[ 1552]
00:25:00.313 bw ( KiB/s): min=22528, max=405504, per=5.34%, avg=237808.94, stdev=118303.87, samples=17
00:25:00.313 iops : min= 22, max= 396, avg=232.24, stdev=115.53, samples=17
00:25:00.313 lat (msec) : 10=0.09%, 20=0.57%, 50=1.05%, 100=1.14%, 250=2.50%
00:25:00.313 lat (msec) : 500=60.40%, 750=18.92%, 1000=5.43%, 2000=9.90%
00:25:00.313 cpu : usr=0.16%, sys=2.57%, ctx=2791, majf=0, minf=32769
00:25:00.313 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2%
00:25:00.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:00.313 issued rwts: total=2283,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.313 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.313 job4: (groupid=0, jobs=1): err= 0: pid=3693093: Mon Jun 10 11:33:27 2024
00:25:00.313 read: IOPS=1, BW=1765KiB/s (1807kB/s)(18.0MiB/10445msec)
00:25:00.313 slat (msec): min=6, max=2102, avg=575.83, stdev=921.34
00:25:00.313 clat (msec): min=79, max=10419, avg=6048.97, stdev=2962.37
00:25:00.313 lat (msec): min=2125, max=10444, avg=6624.80, stdev=2732.24
00:25:00.313 clat percentiles (msec):
00:25:00.313 | 1.00th=[ 81], 5.00th=[ 81], 10.00th=[ 2123], 20.00th=[ 4279],
00:25:00.313 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[ 6477], 60.00th=[ 6477],
00:25:00.313 | 70.00th=[ 6477], 80.00th=[ 8658], 90.00th=[10402], 95.00th=[10402],
00:25:00.313 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:25:00.313 | 99.99th=[10402]
00:25:00.313 lat (msec) : 100=5.56%, >=2000=94.44%
00:25:00.313 cpu : usr=0.00%, sys=0.08%, ctx=76, majf=0, minf=4609
00:25:00.313 IO depths : 1=5.6%, 2=11.1%, 4=22.2%, 8=44.4%, 16=16.7%, 32=0.0%, >=64=0.0%
00:25:00.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.313 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:25:00.313 issued rwts: total=18,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.313 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.313 job4: (groupid=0, jobs=1): err= 0: pid=3693094: Mon Jun 10 11:33:27 2024
00:25:00.313 read: IOPS=35, BW=35.7MiB/s (37.4MB/s)(377MiB/10568msec)
00:25:00.313 slat (usec): min=43, max=2089.7k, avg=27811.79, stdev=210658.62
00:25:00.313 clat (msec): min=79, max=9070, avg=3437.06, stdev=3473.38
00:25:00.313 lat (msec): min=742, max=9073, avg=3464.87, stdev=3478.76
00:25:00.313 clat percentiles (msec):
00:25:00.313 | 1.00th=[ 743], 5.00th=[ 743], 10.00th=[ 751], 20.00th=[ 751],
00:25:00.313 | 30.00th=[ 760], 40.00th=[ 760], 50.00th=[ 793], 60.00th=[ 2802],
00:25:00.313 | 70.00th=[ 6409], 80.00th=[ 8658], 90.00th=[ 8926], 95.00th=[ 8926],
00:25:00.313 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060],
00:25:00.313 | 99.99th=[ 9060]
00:25:00.313 bw ( KiB/s): min= 8192, max=167936, per=1.64%, avg=72824.57, stdev=70033.61, samples=7
00:25:00.313 iops : min= 8, max= 164, avg=71.00, stdev=68.36, samples=7
00:25:00.313 lat (msec) : 100=0.27%, 750=15.12%, 1000=42.44%, >=2000=42.18%
00:25:00.313 cpu : usr=0.00%, sys=1.32%, ctx=360, majf=0, minf=32769
00:25:00.313 read: IOPS=15, BW=15.1MiB/s (15.8MB/s)(161MiB/10694msec)
00:25:00.313 slat (usec): min=121, max=2236.3k, avg=65924.19, stdev=329380.95
00:25:00.313 clat (msec): min=78, max=10549, avg=6016.48, stdev=2400.73
00:25:00.313 lat (msec): min=2124, max=10554, avg=6082.40, stdev=2381.07
00:25:00.313 clat percentiles (msec):
00:25:00.313 | 1.00th=[ 2123], 5.00th=[ 3775], 10.00th=[ 3775], 20.00th=[ 3910],
00:25:00.313 | 30.00th=[ 4010], 40.00th=[ 4144], 50.00th=[ 4279], 60.00th=[ 6812],
00:25:00.313 | 70.00th=[ 6812], 80.00th=[ 8423], 90.00th=[10268], 95.00th=[10537],
00:25:00.313 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537],
00:25:00.313 | 99.99th=[10537]
00:25:00.313 bw ( KiB/s): min= 2048, max=61440, per=0.51%, avg=22528.00, stdev=33714.33, samples=3
00:25:00.313 iops : min= 2, max= 60, avg=22.00, stdev=32.92, samples=3
00:25:00.313 lat (msec) : 100=0.62%, >=2000=99.38%
00:25:00.313 cpu : usr=0.01%, sys=1.12%, ctx=222, majf=0, minf=32769
00:25:00.313 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=5.0%, 16=9.9%, 32=19.9%, >=64=60.9%
00:25:00.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.313 complete : 0=0.0%, 4=97.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.9%
00:25:00.313 issued rwts: total=161,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.313 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.313 job4: (groupid=0, jobs=1): err= 0: pid=3693098: Mon Jun 10 11:33:27 2024
00:25:00.313 read: IOPS=71, BW=71.5MiB/s (74.9MB/s)(754MiB/10552msec)
00:25:00.313 slat (usec): min=30, max=4202.7k, avg=13307.89, stdev=165764.86
00:25:00.313 clat (msec): min=398, max=5234, avg=1455.92, stdev=1641.08
00:25:00.313 lat (msec): min=398, max=6312, avg=1469.22, stdev=1650.67
00:25:00.313 clat percentiles (msec):
00:25:00.313 | 1.00th=[ 401], 5.00th=[ 401], 10.00th=[ 401], 20.00th=[ 405],
00:25:00.313 | 30.00th=[ 430], 40.00th=[ 498], 50.00th=[ 693], 60.00th=[ 919],
00:25:00.313 | 70.00th=[ 1020], 80.00th=[ 2702], 90.00th=[ 4866], 95.00th=[ 5067],
00:25:00.313 | 99.00th=[ 5134], 99.50th=[ 5201], 99.90th=[ 5269], 99.95th=[ 5269],
00:25:00.313 | 99.99th=[ 5269]
00:25:00.313 bw ( KiB/s): min=32768, max=325632, per=3.58%, avg=159463.12, stdev=106205.68, samples=8
00:25:00.313 iops : min= 32, max= 318, avg=155.62, stdev=103.76, samples=8
00:25:00.313 lat (msec) : 500=40.45%, 750=13.00%, 1000=14.99%, 2000=10.74%, >=2000=20.82%
00:25:00.313 cpu : usr=0.06%, sys=1.18%, ctx=905, majf=0, minf=32769
00:25:00.313 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.2%, >=64=91.6%
00:25:00.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.313 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:25:00.313 issued rwts: total=754,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.313 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.313 job4: (groupid=0, jobs=1): err= 0: pid=3693099: Mon Jun 10 11:33:27 2024
00:25:00.313 read: IOPS=113, BW=113MiB/s (119MB/s)(1198MiB/10583msec)
00:25:00.313 slat (usec): min=21, max=2111.7k, avg=8344.31, stdev=100499.47
00:25:00.313 clat (msec): min=198, max=4713, avg=864.40, stdev=1206.83
00:25:00.313 lat (msec): min=199, max=4716, avg=872.74, stdev=1216.68
00:25:00.313 clat percentiles (msec):
00:25:00.313 | 1.00th=[ 201], 5.00th=[ 201], 10.00th=[ 203], 20.00th=[ 209],
00:25:00.313 | 30.00th=[ 222], 40.00th=[ 249], 50.00th=[ 284], 60.00th=[ 558],
00:25:00.313 | 70.00th=[ 609], 80.00th=[ 726], 90.00th=[ 3071], 95.00th=[ 3239],
00:25:00.313 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4732], 99.95th=[ 4732],
00:25:00.313 | 99.99th=[ 4732]
00:25:00.313 bw ( KiB/s): min= 4096, max=603642, per=6.11%, avg=272063.25, stdev=244100.05, samples=8
00:25:00.314 iops : min= 4, max= 589, avg=265.62, stdev=238.28, samples=8
00:25:00.314 lat (msec) : 250=41.15%, 500=16.11%, 750=24.12%, 1000=2.42%, 2000=0.17%
00:25:00.314 lat (msec) : >=2000=16.03%
00:25:00.314 cpu : usr=0.05%, sys=1.84%, ctx=1698, majf=0, minf=32769
00:25:00.314 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.7%
00:25:00.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.314 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:00.314 issued rwts: total=1198,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.314 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.314 job4: (groupid=0, jobs=1): err= 0: pid=3693100: Mon Jun 10 11:33:27 2024
00:25:00.314 read: IOPS=4, BW=4911KiB/s (5029kB/s)(51.0MiB/10634msec)
00:25:00.314 slat (usec): min=283, max=2101.2k, avg=206928.06, stdev=603785.78
00:25:00.314 clat (msec): min=79, max=10632, avg=7739.93, stdev=3149.88
00:25:00.314 lat (msec): min=2116, max=10633, avg=7946.86, stdev=2978.58
00:25:00.314 clat percentiles (msec):
00:25:00.314 | 1.00th=[ 81], 5.00th=[ 2165], 10.00th=[ 4279], 20.00th=[ 4279],
00:25:00.314 | 30.00th=[ 4329], 40.00th=[ 6544], 50.00th=[ 8658], 60.00th=[10402],
00:25:00.314 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10671], 95.00th=[10671],
00:25:00.314 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671],
00:25:00.314 | 99.99th=[10671]
00:25:00.314 lat (msec) : 100=1.96%, >=2000=98.04%
00:25:00.314 cpu : usr=0.00%, sys=0.46%, ctx=101, majf=0, minf=13057
00:25:00.314 IO depths : 1=2.0%, 2=3.9%, 4=7.8%, 8=15.7%, 16=31.4%, 32=39.2%, >=64=0.0%
00:25:00.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.314 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:25:00.314 issued rwts: total=51,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.314 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.314 job4: (groupid=0, jobs=1): err= 0: pid=3693101: Mon Jun 10 11:33:27 2024
00:25:00.314 read: IOPS=87, BW=87.8MiB/s (92.1MB/s)(922MiB/10498msec)
00:25:00.314 slat (usec): min=23, max=2076.0k, avg=11298.37, stdev=128782.39
00:25:00.314 clat (msec): min=77, max=8576, avg=890.32, stdev=1384.53
00:25:00.314 lat (msec): min=201, max=8586, avg=901.62, stdev=1393.45
00:25:00.314 clat percentiles (msec):
00:25:00.314 | 1.00th=[ 201], 5.00th=[ 203], 10.00th=[ 203], 20.00th=[ 220],
00:25:00.314 | 30.00th=[ 300], 40.00th=[ 313], 50.00th=[ 326], 60.00th=[ 338],
00:25:00.314 | 70.00th=[ 384], 80.00th=[ 472], 90.00th=[ 4178], 95.00th=[ 4245],
00:25:00.314 | 99.00th=[ 4396], 99.50th=[ 4396], 99.90th=[ 8557], 99.95th=[ 8557],
00:25:00.314 | 99.99th=[ 8557]
00:25:00.314 bw ( KiB/s): min=12288, max=559104, per=7.30%, avg=325222.40, stdev=198874.40, samples=5
00:25:00.314 iops : min= 12, max= 546, avg=317.60, stdev=194.21, samples=5
00:25:00.314 lat (msec) : 100=0.11%, 250=23.64%, 500=58.68%, 750=1.95%, >=2000=15.62%
00:25:00.314 cpu : usr=0.01%, sys=0.86%, ctx=1250, majf=0, minf=32769
00:25:00.314 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.2%
00:25:00.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.314 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:00.314 issued rwts: total=922,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.314 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.314 job4: (groupid=0, jobs=1): err= 0: pid=3693102: Mon Jun 10 11:33:27 2024
00:25:00.314 read: IOPS=27, BW=27.8MiB/s (29.1MB/s)(292MiB/10506msec)
00:25:00.314 slat (usec): min=347, max=2072.8k, avg=35731.20, stdev=215559.94
00:25:00.314 clat (msec): min=70, max=10391, avg=4231.98, stdev=3079.31
00:25:00.314 lat (msec): min=1116, max=10484, avg=4267.71, stdev=3075.27
00:25:00.314 clat percentiles (msec):
00:25:00.314 | 1.00th=[ 1116], 5.00th=[ 1150], 10.00th=[ 1217], 20.00th=[ 1318],
00:25:00.314 | 30.00th=[ 1452], 40.00th=[ 1502], 50.00th=[ 2433], 60.00th=[ 6275],
00:25:00.314 | 70.00th=[ 7819], 80.00th=[ 8020], 90.00th=[ 8288], 95.00th=[ 8423],
00:25:00.314 | 99.00th=[ 8490], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:25:00.314 | 99.99th=[10402]
00:25:00.314 bw ( KiB/s): min= 6144, max=129024, per=1.26%, avg=55978.67, stdev=46135.34, samples=6
00:25:00.314 iops : min= 6, max= 126, avg=54.67, stdev=45.05, samples=6
00:25:00.314 lat (msec) : 100=0.34%, 2000=43.49%, >=2000=56.16%
00:25:00.314 cpu : usr=0.02%, sys=1.05%, ctx=550, majf=0, minf=32769
00:25:00.314 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.5%, 32=11.0%, >=64=78.4%
00:25:00.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.314 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
00:25:00.314 issued rwts: total=292,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.314 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.314 job4: (groupid=0, jobs=1): err= 0: pid=3693103: Mon Jun 10 11:33:27 2024
00:25:00.314 read: IOPS=43, BW=43.3MiB/s (45.4MB/s)(435MiB/10040msec)
00:25:00.314 slat (usec): min=31, max=2213.9k, avg=22988.37, stdev=194225.36
00:25:00.314 clat (msec): min=38, max=8925, avg=1888.28, stdev=3167.69
00:25:00.314 lat (msec): min=43, max=8929, avg=1911.26, stdev=3184.57
00:25:00.314 clat percentiles (msec):
00:25:00.314 | 1.00th=[ 54], 5.00th=[ 142], 10.00th=[ 241], 20.00th=[ 380],
00:25:00.314 | 30.00th=[ 418], 40.00th=[ 439], 50.00th=[ 447], 60.00th=[ 460],
00:25:00.314 | 70.00th=[ 485], 80.00th=[ 514], 90.00th=[ 8926], 95.00th=[ 8926],
00:25:00.314 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926],
00:25:00.314 | 99.99th=[ 8926]
00:25:00.314 bw ( KiB/s): min=55296, max=321536, per=4.72%, avg=210261.33, stdev=138392.88, samples=3
00:25:00.314 iops : min= 54, max= 314, avg=205.33, stdev=135.15, samples=3
00:25:00.314 lat (msec) : 50=0.92%, 100=2.07%, 250=7.82%, 500=66.21%, 750=4.14%
00:25:00.314 lat (msec) : >=2000=18.85%
00:25:00.314 cpu : usr=0.01%, sys=1.27%, ctx=683, majf=0, minf=32769
00:25:00.314 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.7%, 32=7.4%, >=64=85.5%
00:25:00.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.314 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:25:00.314 issued rwts: total=435,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.314 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.314 job5: (groupid=0, jobs=1): err= 0: pid=3693106: Mon Jun 10 11:33:27 2024
00:25:00.314 read: IOPS=185, BW=186MiB/s (195MB/s)(1868MiB/10050msec)
00:25:00.314 slat (usec): min=22, max=121443, avg=5370.23, stdev=16215.18
00:25:00.314 clat (msec): min=10, max=3116, avg=660.27, stdev=709.39
00:25:00.314 lat (msec): min=112, max=3154, avg=665.64, stdev=712.71
00:25:00.314 clat percentiles (msec):
00:25:00.314 | 1.00th=[ 201], 5.00th=[ 205], 10.00th=[ 205], 20.00th=[ 207],
00:25:00.314 | 30.00th=[ 209], 40.00th=[ 215], 50.00th=[ 351], 60.00th=[ 531],
00:25:00.314 | 70.00th=[ 651], 80.00th=[ 776], 90.00th=[ 1989], 95.00th=[ 2366],
00:25:00.314 | 99.00th=[ 3037], 99.50th=[ 3071], 99.90th=[ 3104], 99.95th=[ 3104],
00:25:00.314 | 99.99th=[ 3104]
00:25:00.314 bw ( KiB/s): min=26624, max=624640, per=4.40%, avg=195906.22, stdev=206673.15, samples=18
00:25:00.314 iops : min= 26, max= 610, avg=191.28, stdev=201.83, samples=18
00:25:00.314 lat (msec) : 20=0.05%, 250=44.65%, 500=13.87%, 750=18.79%, 1000=5.09%
00:25:00.314 lat (msec) : 2000=8.08%, >=2000=9.48%
00:25:00.314 cpu : usr=0.08%, sys=1.89%, ctx=2785, majf=0, minf=32769
00:25:00.314 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6%
00:25:00.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.314 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:00.314 issued rwts: total=1868,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.314 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.314 job5: (groupid=0, jobs=1): err= 0: pid=3693107: Mon Jun 10 11:33:27 2024
00:25:00.314 read: IOPS=100, BW=100MiB/s (105MB/s)(1006MiB/10012msec)
00:25:00.314 slat (usec): min=22, max=2162.1k, avg=9938.77, stdev=130186.75
00:25:00.314 clat (msec): min=10, max=6678, avg=253.14, stdev=539.18
00:25:00.314 lat (msec): min=12, max=8576, avg=263.08, stdev=599.84
00:25:00.314 clat percentiles (msec):
00:25:00.314 | 1.00th=[ 19], 5.00th=[ 52], 10.00th=[ 94], 20.00th=[ 131],
00:25:00.314 | 30.00th=[ 201], 40.00th=[ 203], 50.00th=[ 209], 60.00th=[ 215],
00:25:00.314 | 70.00th=[ 228], 80.00th=[ 255], 90.00th=[ 292], 95.00th=[ 300],
00:25:00.314 | 99.00th=[ 2400], 99.50th=[ 4530], 99.90th=[ 6678], 99.95th=[ 6678],
00:25:00.314 | 99.99th=[ 6678]
00:25:00.314 bw ( KiB/s): min=503808, max=505856, per=11.33%, avg=504832.00, stdev=1448.15, samples=2
00:25:00.314 iops : min= 492, max= 494, avg=493.00, stdev= 1.41, samples=2
00:25:00.314 lat (msec) : 20=1.19%, 50=3.68%, 100=5.86%, 250=67.30%, 500=20.68%
00:25:00.314 lat (msec) : >=2000=1.29%
00:25:00.314 cpu : usr=0.00%, sys=0.94%, ctx=987, majf=0, minf=32769
00:25:00.314 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7%
00:25:00.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.314 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:00.314 issued rwts: total=1006,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.314 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.314 job5: (groupid=0, jobs=1): err= 0: pid=3693108: Mon Jun 10 11:33:27 2024
00:25:00.314 read: IOPS=143, BW=143MiB/s (150MB/s)(1440MiB/10041msec)
00:25:00.314 slat (usec): min=28, max=2173.0k, avg=6939.15, stdev=58584.75
00:25:00.314 clat (msec): min=39, max=2999, avg=837.02, stdev=643.81
00:25:00.314 lat (msec): min=40, max=3000, avg=843.96, stdev=646.34
00:25:00.314 clat percentiles (msec):
00:25:00.314 | 1.00th=[ 100], 5.00th=[ 405], 10.00th=[ 422], 20.00th=[ 502],
00:25:00.314 | 30.00th=[ 518], 40.00th=[ 567], 50.00th=[ 701], 60.00th=[ 735],
00:25:00.314 | 70.00th=[ 810], 80.00th=[ 869], 90.00th=[ 1116], 95.00th=[ 2769],
00:25:00.315 | 99.00th=[ 2937], 99.50th=[ 2937], 99.90th=[ 3004], 99.95th=[ 3004],
00:25:00.315 | 99.99th=[ 3004]
00:25:00.315 bw ( KiB/s): min=36864, max=290816, per=4.02%, avg=179247.27, stdev=66558.86, samples=15
00:25:00.315 iops : min= 36, max= 284, avg=175.00, stdev=65.02, samples=15
00:25:00.315 lat (msec) : 50=0.56%, 100=0.49%, 250=1.81%, 500=17.01%, 750=43.68%
00:25:00.315 lat (msec) : 1000=22.15%, 2000=5.49%, >=2000=8.82%
00:25:00.315 cpu : usr=0.05%, sys=2.09%, ctx=1532, majf=0, minf=32769
00:25:00.315 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6%
00:25:00.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.315 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:00.315 issued rwts: total=1440,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.315 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.315 job5: (groupid=0, jobs=1): err= 0: pid=3693109: Mon Jun 10 11:33:27 2024
00:25:00.315 read: IOPS=37, BW=37.1MiB/s (38.9MB/s)(375MiB/10116msec)
00:25:00.315 slat (usec): min=604, max=2075.2k, avg=26667.44, stdev=168666.70
00:25:00.315 clat (msec): min=114, max=5093, avg=2029.50, stdev=1499.10
00:25:00.315 lat (msec): min=226, max=5101, avg=2056.17, stdev=1505.30
00:25:00.315 clat percentiles (msec):
00:25:00.315 | 1.00th=[ 241], 5.00th=[ 793], 10.00th=[ 818], 20.00th=[ 835],
00:25:00.315 | 30.00th=[ 844], 40.00th=[ 869], 50.00th=[ 911], 60.00th=[ 1385],
00:25:00.315 | 70.00th=[ 3507], 80.00th=[ 3910], 90.00th=[ 4077], 95.00th=[ 4178],
00:25:00.315 | 99.00th=[ 5067], 99.50th=[ 5067], 99.90th=[ 5067], 99.95th=[ 5067],
00:25:00.315 | 99.99th=[ 5067]
00:25:00.315 bw ( KiB/s): min= 2048, max=141312, per=1.43%, avg=63488.00, stdev=60615.26, samples=8
00:25:00.315 iops : min= 2, max= 138, avg=62.00, stdev=59.19, samples=8
00:25:00.315 lat (msec) : 250=1.07%, 500=1.60%, 750=1.33%, 1000=49.87%, 2000=7.20%
00:25:00.315 lat (msec) : >=2000=38.93%
00:25:00.315 cpu : usr=0.00%, sys=0.94%, ctx=929, majf=0, minf=32769
00:25:00.315 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.3%, 32=8.5%, >=64=83.2%
00:25:00.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.315 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:25:00.315 issued rwts: total=375,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.315 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.315 job5: (groupid=0, jobs=1): err= 0: pid=3693110: Mon Jun 10 11:33:27 2024
00:25:00.315 read: IOPS=111, BW=111MiB/s (117MB/s)(1121MiB/10058msec)
00:25:00.315 slat (usec): min=29, max=2077.0k, avg=8931.54, stdev=62831.59
00:25:00.315 clat (msec): min=39, max=3344, avg=1092.59, stdev=827.63
00:25:00.315 lat (msec): min=69, max=3379, avg=1101.52, stdev=831.12
00:25:00.315 clat percentiles (msec):
00:25:00.315 | 1.00th=[ 82], 5.00th=[ 275], 10.00th=[ 510], 20.00th=[ 542],
00:25:00.315 | 30.00th=[ 550], 40.00th=[ 575], 50.00th=[ 634], 60.00th=[ 1083],
00:25:00.315 | 70.00th=[ 1250], 80.00th=[ 1703], 90.00th=[ 2735], 95.00th=[ 3071],
00:25:00.315 | 99.00th=[ 3272], 99.50th=[ 3306], 99.90th=[ 3339], 99.95th=[ 3339],
00:25:00.315 | 99.99th=[ 3339]
00:25:00.315 bw ( KiB/s): min=45056, max=253952, per=2.84%, avg=126631.38, stdev=70320.26, samples=16
00:25:00.315 iops : min= 44, max= 248, avg=123.62, stdev=68.61, samples=16
00:25:00.315 lat (msec) : 50=0.09%, 100=1.34%, 250=3.21%, 500=4.91%, 750=45.58%
00:25:00.315 lat (msec) : 1000=2.68%, 2000=30.87%, >=2000=11.33%
00:25:00.315 cpu : usr=0.10%, sys=1.69%, ctx=2115, majf=0, minf=32769
00:25:00.315 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.4%
00:25:00.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.315 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:00.315 issued rwts: total=1121,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.315 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.315 job5: (groupid=0, jobs=1): err= 0: pid=3693111: Mon Jun 10 11:33:27 2024
00:25:00.315 read: IOPS=62, BW=62.4MiB/s (65.5MB/s)(627MiB/10042msec)
00:25:00.315 slat (usec): min=30, max=2195.2k, avg=15945.55, stdev=89239.16
00:25:00.315 clat (msec): min=39, max=6630, avg=1909.62, stdev=1850.60
00:25:00.315 lat (msec): min=43, max=6637, avg=1925.56, stdev=1859.67
00:25:00.315 clat percentiles (msec):
00:25:00.315 | 1.00th=[ 51], 5.00th=[ 197], 10.00th=[ 397], 20.00th=[ 743],
00:25:00.315 | 30.00th=[ 751], 40.00th=[ 768], 50.00th=[ 793], 60.00th=[ 961],
00:25:00.315 | 70.00th=[ 2970], 80.00th=[ 3775], 90.00th=[ 4665], 95.00th=[ 6141],
00:25:00.315 | 99.00th=[ 6544], 99.50th=[ 6611], 99.90th=[ 6611], 99.95th=[ 6611],
00:25:00.315 | 99.99th=[ 6611]
00:25:00.315 bw ( KiB/s): min= 2048, max=174080, per=1.53%, avg=68263.07, stdev=65928.89, samples=15
00:25:00.315 iops : min= 2, max= 170, avg=66.60, stdev=64.43, samples=15
00:25:00.315 lat (msec) : 50=0.96%, 100=1.75%, 250=4.15%, 500=5.58%, 750=15.15%
00:25:00.315 lat (msec) : 1000=33.01%, 2000=6.38%, >=2000=33.01%
00:25:00.315 cpu : usr=0.07%, sys=1.72%, ctx=1986, majf=0, minf=32769
00:25:00.315 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.1%, >=64=90.0%
00:25:00.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.315 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:25:00.315 issued rwts: total=627,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.315 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.315 job5: (groupid=0, jobs=1): err= 0: pid=3693112: Mon Jun 10 11:33:27 2024
00:25:00.315 read: IOPS=69, BW=69.7MiB/s (73.1MB/s)(708MiB/10152msec)
00:25:00.315 slat (usec): min=21, max=2021.8k, avg=14172.38, stdev=96653.35
00:25:00.315 clat (msec): min=115, max=4337, avg=1631.36, stdev=1134.32
00:25:00.315 lat (msec): min=218, max=4552, avg=1645.53, stdev=1137.23
00:25:00.315 clat percentiles (msec):
00:25:00.315 | 1.00th=[ 305], 5.00th=[ 313], 10.00th=[ 575], 20.00th=[ 793],
00:25:00.315 | 30.00th=[ 953], 40.00th=[ 1062], 50.00th=[ 1133], 60.00th=[ 1217],
00:25:00.315 | 70.00th=[ 1921], 80.00th=[ 2601], 90.00th=[ 3675], 95.00th=[ 4245],
00:25:00.315 | 99.00th=[ 4245], 99.50th=[ 4279], 99.90th=[ 4329], 99.95th=[ 4329],
00:25:00.315 | 99.99th=[ 4329]
00:25:00.315 bw ( KiB/s): min= 6144, max=356352, per=2.22%, avg=98986.67, stdev=95315.88, samples=12
00:25:00.315 iops : min= 6, max= 348, avg=96.67, stdev=93.08, samples=12
00:25:00.315 lat (msec) : 250=0.56%, 500=8.47%, 750=8.33%, 1000=17.37%, 2000=36.16%
00:25:00.315 lat (msec) : >=2000=29.10%
00:25:00.315 cpu : usr=0.04%, sys=1.27%, ctx=1211, majf=0, minf=32769
00:25:00.315 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1%
00:25:00.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.315 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:25:00.315 issued rwts: total=708,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.315 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.315 job5: (groupid=0, jobs=1): err= 0: pid=3693113: Mon Jun 10 11:33:27 2024
00:25:00.315 read: IOPS=98, BW=98.4MiB/s (103MB/s)(993MiB/10094msec)
00:25:00.315 slat (usec): min=24, max=316524, avg=10148.97, stdev=22522.19
00:25:00.315 clat (msec): min=10, max=2642, avg=1234.96, stdev=634.88
00:25:00.315 lat (msec): min=108, max=2652, avg=1245.11, stdev=635.24
00:25:00.315 clat percentiles (msec):
00:25:00.315 | 1.00th=[ 230], 5.00th=[ 531], 10.00th=[ 625], 20.00th=[ 693],
00:25:00.315 | 30.00th=[ 751], 40.00th=[ 869], 50.00th=[ 1003], 60.00th=[ 1183],
00:25:00.315 | 70.00th=[ 1586], 80.00th=[ 1921], 90.00th=[ 2299], 95.00th=[ 2433],
00:25:00.315 | 99.00th=[ 2567], 99.50th=[ 2601], 99.90th=[ 2635], 99.95th=[ 2635],
00:25:00.315 | 99.99th=[ 2635]
00:25:00.315 bw ( KiB/s): min=14336, max=247808, per=2.09%, avg=93113.53, stdev=73499.96, samples=19
00:25:00.315 iops : min= 14, max= 242, avg=90.84, stdev=71.82, samples=19
00:25:00.315 lat (msec) : 20=0.10%, 250=1.31%, 500=2.22%, 750=26.08%, 1000=20.24%
00:25:00.315 lat (msec) : 2000=30.61%, >=2000=19.44%
00:25:00.315 cpu : usr=0.03%, sys=1.64%, ctx=2096, majf=0, minf=32769
00:25:00.315 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7%
00:25:00.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.315 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:00.315 issued rwts: total=993,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.315 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.315 job5: (groupid=0, jobs=1): err= 0: pid=3693114: Mon Jun 10 11:33:27 2024
00:25:00.315 read: IOPS=89, BW=89.6MiB/s (94.0MB/s)(907MiB/10121msec)
00:25:00.316 slat (usec): min=28, max=2065.4k, avg=11040.95, stdev=69758.54
00:25:00.316 clat (msec): min=100, max=3101, avg=1333.57, stdev=731.01
00:25:00.316 lat (msec): min=148, max=3114, avg=1344.61, stdev=733.01
00:25:00.316 clat percentiles (msec):
00:25:00.316 | 1.00th=[ 192], 5.00th=[ 435], 10.00th=[ 776], 20.00th=[ 894],
00:25:00.316 | 30.00th=[ 927], 40.00th=[ 978], 50.00th=[ 1036], 60.00th=[ 1116],
00:25:00.316 | 70.00th=[ 1318], 80.00th=[ 1838], 90.00th=[ 2869], 95.00th=[ 2903],
00:25:00.316 | 99.00th=[ 3037], 99.50th=[ 3037], 99.90th=[ 3104], 99.95th=[ 3104],
00:25:00.316 | 99.99th=[ 3104]
00:25:00.316 bw ( KiB/s): min= 2048, max=178176, per=2.37%, avg=105479.53, stdev=46282.55, samples=15
00:25:00.316 iops : min= 2, max= 174, avg=103.00, stdev=45.20, samples=15
00:25:00.316 lat (msec) : 250=1.87%, 500=3.64%, 750=3.75%, 1000=35.94%, 2000=40.68%
00:25:00.316 lat (msec) : >=2000=14.11%
00:25:00.316 cpu : usr=0.07%, sys=1.59%, ctx=1208, majf=0, minf=32769
00:25:00.316 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.1%
00:25:00.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.316 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:00.316 issued rwts: total=907,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.316 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.316 job5: (groupid=0, jobs=1): err= 0: pid=3693115: Mon Jun 10 11:33:27 2024
00:25:00.316 read: IOPS=2, BW=2158KiB/s (2210kB/s)(22.0MiB/10437msec)
00:25:00.316 slat (msec): min=2, max=2117, avg=471.16, stdev=872.51
00:25:00.316 clat (msec): min=71, max=8585, avg=4393.11, stdev=2510.51
00:25:00.316 lat (msec): min=2113, max=10436, avg=4864.27, stdev=2630.56
00:25:00.316 clat percentiles (msec):
00:25:00.316 | 1.00th=[ 71], 5.00th=[ 2106], 10.00th=[ 2123], 20.00th=[ 2165],
00:25:00.316 | 30.00th=[ 2165], 40.00th=[ 2165], 50.00th=[ 4279], 60.00th=[ 4329],
00:25:00.316 | 70.00th=[ 6409], 80.00th=[ 6477], 90.00th=[ 8557], 95.00th=[ 8557],
00:25:00.316 | 99.00th=[ 8557], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 8557],
00:25:00.316 | 99.99th=[ 8557]
00:25:00.316 lat (msec) : 100=4.55%, >=2000=95.45%
00:25:00.316 cpu : usr=0.00%, sys=0.13%, ctx=66, majf=0, minf=5633
00:25:00.316 IO depths : 1=4.5%, 2=9.1%, 4=18.2%, 8=36.4%, 16=31.8%, 32=0.0%, >=64=0.0%
00:25:00.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.316 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:25:00.316 issued rwts: total=22,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.316 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.316 job5: (groupid=0, jobs=1): err= 0: pid=3693116: Mon Jun 10 11:33:27 2024
00:25:00.316 read: IOPS=5, BW=5544KiB/s (5678kB/s)(58.0MiB/10712msec)
00:25:00.316 slat (usec): min=893, max=2113.0k, avg=183438.18, stdev=579212.77
00:25:00.316 clat (msec): min=71, max=10703, avg=8334.26, stdev=3298.62
00:25:00.316 lat (msec): min=2110, max=10711, avg=8517.70, stdev=3122.18
00:25:00.316 clat percentiles (msec):
00:25:00.316 | 1.00th=[ 72], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 4329],
00:25:00.316 | 30.00th=[ 6477], 40.00th=[10537], 50.00th=[10537], 60.00th=[10671],
00:25:00.316 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671],
00:25:00.316 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671],
00:25:00.316 | 99.99th=[10671]
00:25:00.316 lat (msec) : 100=1.72%, >=2000=98.28%
00:25:00.316 cpu : usr=0.00%, sys=0.71%, ctx=108, majf=0, minf=14849
00:25:00.316 IO depths : 1=1.7%, 2=3.4%, 4=6.9%, 8=13.8%, 16=27.6%, 32=46.6%, >=64=0.0%
00:25:00.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.316 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:25:00.316 issued rwts: total=58,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.316 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.316 job5: (groupid=0, jobs=1): err= 0: pid=3693117: Mon Jun 10 11:33:27 2024
00:25:00.316 read: IOPS=23, BW=24.0MiB/s (25.1MB/s)(244MiB/10186msec)
00:25:00.316 slat (usec): min=85, max=2075.2k, avg=41268.42, stdev=219256.57
00:25:00.316 clat (msec): min=114, max=6566, avg=4228.55, stdev=1744.43
00:25:00.316 lat (msec): min=218, max=6665, avg=4269.82, stdev=1739.56
00:25:00.316 clat percentiles (msec):
00:25:00.316 | 1.00th=[ 224], 5.00th=[ 793], 10.00th=[ 1116], 20.00th=[ 1603],
00:25:00.316 | 30.00th=[ 4396], 40.00th=[ 4530], 50.00th=[ 4732], 60.00th=[ 4933],
00:25:00.316 | 70.00th=[ 5336], 80.00th=[ 5671], 90.00th=[ 5738], 95.00th=[ 5940],
00:25:00.316 | 99.00th=[ 6477], 99.50th=[ 6544], 99.90th=[ 6544], 99.95th=[ 6544],
00:25:00.316 | 99.99th=[ 6544]
00:25:00.316 bw ( KiB/s): min= 4096, max=67719, per=0.59%, avg=26415.44, stdev=19609.10, samples=9
00:25:00.316 iops : min= 4, max= 66, avg=25.78, stdev=19.12, samples=9
00:25:00.316 lat (msec) : 250=1.23%, 500=0.82%, 750=2.05%, 1000=3.28%, 2000=13.11%
00:25:00.316 lat (msec) : >=2000=79.51%
00:25:00.316 cpu : usr=0.00%, sys=1.39%, ctx=605, majf=0, minf=32769
00:25:00.316 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.3%, 16=6.6%, 32=13.1%, >=64=74.2%
00:25:00.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.316 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8%
00:25:00.316 issued rwts: total=244,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.316 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.316 job5: (groupid=0, jobs=1): err= 0: pid=3693118: Mon Jun 10 11:33:27 2024
00:25:00.316 read: IOPS=75, BW=75.2MiB/s (78.8MB/s)(761MiB/10125msec)
00:25:00.316 slat (usec): min=24, max=1899.2k, avg=13149.13, stdev=88616.76
00:25:00.316 clat (msec): min=115, max=6534, avg=1338.94, stdev=1544.08
00:25:00.316 lat (msec): min=186, max=8433, avg=1352.09, stdev=1561.83
00:25:00.316 clat percentiles (msec):
00:25:00.316 | 1.00th=[ 342], 5.00th=[ 397], 10.00th=[ 397], 20.00th=[ 401],
00:25:00.316 | 30.00th=[ 401], 40.00th=[ 401], 50.00th=[ 430], 60.00th=[ 439],
00:25:00.316 | 70.00th=[ 1318], 80.00th=[ 2534], 90.00th=[ 4530], 95.00th=[ 4799],
00:25:00.316 | 99.00th=[ 5201], 99.50th=[ 5269], 99.90th=[ 6544], 99.95th=[ 6544],
00:25:00.316 | 99.99th=[ 6544]
00:25:00.316 bw ( KiB/s): min= 8192, max=315392, per=2.08%, avg=92745.14, stdev=120684.86, samples=14
00:25:00.316 iops : min= 8, max= 308, avg=90.57, stdev=117.86, samples=14
00:25:00.316 lat (msec) : 250=0.53%, 500=62.55%, 750=2.89%, 1000=1.84%, 2000=9.20%
00:25:00.316 lat (msec) : >=2000=23.00%
00:25:00.316 cpu : usr=0.00%, sys=1.08%, ctx=1201, majf=0, minf=32769
00:25:00.316 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.2%, >=64=91.7%
00:25:00.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.316 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:25:00.316 issued rwts: total=761,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.316 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.316
00:25:00.316 Run status group 0 (all jobs):
00:25:00.316 READ: bw=4350MiB/s (4561MB/s), 1267KiB/s-359MiB/s (1297kB/s-377MB/s), io=45.8GiB (49.1GB), run=10010-10776msec
00:25:00.316
00:25:00.316 Disk stats (read/write):
00:25:00.316 nvme0n1: ios=40621/0, merge=0/0, ticks=5243362/0, in_queue=5243362, util=98.22%
00:25:00.316 nvme1n1: ios=56798/0, merge=0/0, ticks=6459394/0, in_queue=6459394, util=98.33%
00:25:00.316 nvme2n1: ios=58111/0, merge=0/0, ticks=7257424/0, in_queue=7257424, util=98.55%
00:25:00.316 nvme3n1: ios=64086/0, merge=0/0, ticks=7170748/0, in_queue=7170748, util=98.79%
00:25:00.316 nvme4n1: ios=72490/0, merge=0/0, ticks=7542896/0, in_queue=7542896, util=98.97%
00:25:00.316 nvme5n1: ios=80905/0, merge=0/0, ticks=7390553/0, in_queue=7390553, util=98.61%
11:33:28 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync
11:33:28 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5
11:33:28 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
11:33:28 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0
00:25:00.887 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s)
00:25:00.887 11:33:29 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000
00:25:00.888 11:33:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0
00:25:00.888 11:33:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL
00:25:00.888 11:33:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000000
00:25:00.888 11:33:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL
00:25:00.888 11:33:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000000
00:25:00.888 11:33:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0
00:25:00.888 11:33:29 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:00.888 11:33:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.888 11:33:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:00.888 11:33:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.888 11:33:29 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:00.888 11:33:29 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:02.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:02.273 11:33:30 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:25:02.274 11:33:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:25:02.274 11:33:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:02.274 11:33:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000001 00:25:02.274 11:33:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000001 00:25:02.274 11:33:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:02.274 11:33:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:25:02.274 11:33:30 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:02.274 11:33:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:02.274 11:33:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:02.274 11:33:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.274 11:33:30 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:02.274 11:33:30 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:03.656 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:03.656 11:33:32 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:25:03.656 11:33:32 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:25:03.656 11:33:32 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:03.656 11:33:32 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000002 00:25:03.656 11:33:32 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:03.656 11:33:32 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000002 00:25:03.656 11:33:32 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:25:03.656 11:33:32 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:03.656 11:33:32 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:03.656 11:33:32 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:03.656 11:33:32 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:03.656 11:33:32 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 
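The xtrace above, and its repetitions below for cnode2 through cnode5, is srq_overwhelm.sh's teardown path: disconnect the initiator-side controller, wait until no block device reports the matching serial number in lsblk, then delete the subsystem over RPC. A condensed sketch of that sequence, reconstructed from the trace (rpc_cmd is the test framework's RPC wrapper; the polling loop bound and sleep are assumptions, the trace only shows single lsblk/grep probes):

    for i in $(seq 0 5); do
        serial="SPDK0000000000000$i"                              # serials as printed in the trace
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
        # waitforserial_disconnect: poll until the serial disappears from lsblk
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            sleep 1
        done
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    done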
00:25:03.656 11:33:32 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:05.039 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:05.039 11:33:33 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:25:05.039 11:33:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:25:05.039 11:33:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:05.039 11:33:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000003 00:25:05.039 11:33:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:05.039 11:33:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000003 00:25:05.039 11:33:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:25:05.039 11:33:33 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:05.039 11:33:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:05.039 11:33:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:05.039 11:33:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:05.039 11:33:33 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:05.039 11:33:33 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:06.427 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:06.427 11:33:34 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:25:06.427 11:33:34 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:25:06.427 11:33:34 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:06.427 11:33:34 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000004 00:25:06.427 11:33:35 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:06.427 11:33:35 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000004 00:25:06.427 11:33:35 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:25:06.427 11:33:35 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:06.427 11:33:35 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:06.427 11:33:35 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:06.427 11:33:35 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:06.427 11:33:35 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:06.427 11:33:35 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:07.823 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:25:07.823 11:33:36 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000005 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000005 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # sync 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@120 -- # set +e 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:07.823 rmmod nvme_rdma 00:25:07.823 rmmod nvme_fabrics 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set -e 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # return 0 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # '[' -n 3690192 ']' 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@490 -- # killprocess 3690192 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@949 -- # '[' -z 3690192 ']' 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@953 -- # kill -0 3690192 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # uname 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3690192 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3690192' 00:25:07.823 killing process with pid 3690192 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@968 -- # kill 3690192 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@973 -- # wait 3690192 00:25:07.823 11:33:36 
nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:07.823 00:25:07.823 real 0m37.827s 00:25:07.823 user 2m18.419s 00:25:07.823 sys 0m17.279s 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:07.823 11:33:36 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:07.823 ************************************ 00:25:07.823 END TEST nvmf_srq_overwhelm 00:25:07.823 ************************************ 00:25:08.084 11:33:36 nvmf_rdma -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:25:08.084 11:33:36 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:08.084 11:33:36 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:08.084 11:33:36 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:08.084 ************************************ 00:25:08.084 START TEST nvmf_shutdown 00:25:08.084 ************************************ 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:25:08.084 * Looking for test storage... 00:25:08.084 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:08.084 11:33:36 
nvmf_rdma.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
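For readability, the environment that nvmf/common.sh establishes in the lines above reduces to the following (a cleaned-up excerpt; the parameter expansion used to derive the host ID is an assumption, the trace only shows that NVME_HOSTID equals the UUID portion of the generated NQN):

    NVMF_PORT=4420
    NVMF_IP_PREFIX=192.168.100
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)                  # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}                   # assumed derivation of the host ID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn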
00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:08.084 11:33:36 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:08.084 ************************************ 00:25:08.084 START TEST nvmf_shutdown_tc1 00:25:08.084 ************************************ 00:25:08.084 11:33:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc1 00:25:08.084 11:33:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:25:08.084 11:33:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:08.084 11:33:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:08.084 11:33:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:08.084 11:33:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:08.084 11:33:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:08.084 11:33:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:08.084 11:33:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.084 11:33:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:08.084 11:33:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.084 11:33:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:08.084 11:33:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:08.084 11:33:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:08.084 11:33:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga 
net_devs 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:25:16.257 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:25:16.257 11:33:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
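The enumeration above classifies NICs by PCI vendor/device ID (Intel E810 and X722 IDs on one side, the Mellanox 0x15b3 device list on the other) and, because SPDK_TEST_NVMF_NICS=mlx5, keeps only the Mellanox entries; both functions found, 0000:98:00.0 and 0000:98:00.1, report device ID 0x1015. The same lookup can be reproduced outside the script with lspci (a sketch; the helper itself walks its own pci_bus_cache map rather than calling lspci):

    # List the Mellanox functions with device ID 0x1015, the ID matched in the trace.
    lspci -Dnn | grep '\[15b3:1015\]'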
00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:25:16.257 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:25:16.257 Found net devices under 0000:98:00.0: mlx_0_0 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:25:16.257 Found net devices under 0000:98:00.1: mlx_0_1 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
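Mapping each retained PCI function to its kernel netdev is done through sysfs, exactly as the pci_net_devs expansions above show; a standalone version of that step:

    pci=0000:98:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev bound to the function
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, leaving e.g. mlx_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"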
00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # rdma_device_init 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # uname 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:16.257 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:16.258 11:33:44 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:16.258 26: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:16.258 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:25:16.258 altname enp152s0f0np0 00:25:16.258 altname ens817f0np0 00:25:16.258 inet 192.168.100.8/24 scope global mlx_0_0 00:25:16.258 valid_lft forever preferred_lft forever 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:16.258 27: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:16.258 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:25:16.258 altname enp152s0f1np1 00:25:16.258 altname ens817f1np1 00:25:16.258 inet 192.168.100.9/24 scope global mlx_0_1 00:25:16.258 valid_lft forever preferred_lft forever 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:16.258 
11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:16.258 11:33:44 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:16.258 192.168.100.9' 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:16.258 192.168.100.9' 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # head -n 1 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:16.258 192.168.100.9' 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # tail -n +2 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # head -n 1 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3700493 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3700493 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 3700493 ']' 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:16.258 11:33:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:16.258 [2024-06-10 11:33:44.299067] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
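The allocate_nic_ips / get_ip_address steps above resolve the two RDMA interfaces to 192.168.100.8 and 192.168.100.9, which become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP before the target application is launched. The extraction is the three-stage pipeline shown in the trace:

    get_ip_address() {
        local interface=$1
        # field 4 of `ip -o -4 addr show` is the CIDR address, e.g. 192.168.100.8/24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # 192.168.100.8
    get_ip_address mlx_0_1    # 192.168.100.9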
00:25:16.259 [2024-06-10 11:33:44.299131] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:16.259 EAL: No free 2048 kB hugepages reported on node 1 00:25:16.259 [2024-06-10 11:33:44.380372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:16.259 [2024-06-10 11:33:44.475186] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:16.259 [2024-06-10 11:33:44.475244] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:16.259 [2024-06-10 11:33:44.475252] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:16.259 [2024-06-10 11:33:44.475259] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:16.259 [2024-06-10 11:33:44.475265] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:16.259 [2024-06-10 11:33:44.475396] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:25:16.259 [2024-06-10 11:33:44.475561] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:25:16.259 [2024-06-10 11:33:44.475726] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.259 [2024-06-10 11:33:44.475728] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:25:16.259 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:16.259 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:25:16.259 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:16.259 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:16.259 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:16.259 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:16.259 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:16.259 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.259 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:16.259 [2024-06-10 11:33:45.171386] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x556320/0x55a810) succeed. 00:25:16.259 [2024-06-10 11:33:45.185683] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x557960/0x59bea0) succeed. 
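With nvmf_tgt up on core mask 0x1E (reactors on cores 1 through 4, as the notices above confirm), the RDMA transport is created over JSON-RPC and the two IB devices are registered. A sketch of the equivalent direct call, assuming rpc_cmd wraps scripts/rpc.py against the default /var/tmp/spdk.sock socket waited on earlier:

    # 1024 shared receive buffers and an 8 KiB I/O unit size, as passed in the trace.
    scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192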
00:25:16.519 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.519 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:16.519 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:16.519 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:16.519 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:16.519 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:16.519 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:16.519 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:16.519 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:16.519 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:16.519 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:16.519 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:16.519 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:16.519 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:16.519 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:16.519 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:16.519 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:16.519 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:16.519 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:16.519 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:16.519 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:16.519 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:16.519 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:16.519 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:16.520 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:16.520 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:16.520 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:16.520 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.520 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:16.520 Malloc1 00:25:16.520 [2024-06-10 11:33:45.416079] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:16.520 Malloc2 00:25:16.520 Malloc3 00:25:16.780 Malloc4 
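shutdown.sh then writes one block per subsystem (1 through 10) into rpcs.txt and replays the file through rpc_cmd; the Malloc1 through Malloc10 bdevs and the RDMA listener on 192.168.100.8 port 4420 reported above and below are the visible result. The generated file itself is not echoed in the log, so the following per-subsystem block is only an illustrative sketch built from the MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 values set at the top of the test:

    # Illustrative only; the literal rpcs.txt contents are not shown in the log.
    bdev_malloc_create 64 512 -b Malloc$i
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420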
00:25:16.780 Malloc5 00:25:16.780 Malloc6 00:25:16.780 Malloc7 00:25:16.780 Malloc8 00:25:16.780 Malloc9 00:25:17.042 Malloc10 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3700844 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3700844 /var/tmp/bdevperf.sock 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 3700844 ']' 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:17.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:17.042 { 00:25:17.042 "params": { 00:25:17.042 "name": "Nvme$subsystem", 00:25:17.042 "trtype": "$TEST_TRANSPORT", 00:25:17.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.042 "adrfam": "ipv4", 00:25:17.042 "trsvcid": "$NVMF_PORT", 00:25:17.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.042 "hdgst": ${hdgst:-false}, 00:25:17.042 "ddgst": ${ddgst:-false} 00:25:17.042 }, 00:25:17.042 "method": "bdev_nvme_attach_controller" 00:25:17.042 } 00:25:17.042 EOF 00:25:17.042 )") 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:17.042 { 00:25:17.042 "params": { 00:25:17.042 "name": "Nvme$subsystem", 00:25:17.042 "trtype": 
"$TEST_TRANSPORT", 00:25:17.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.042 "adrfam": "ipv4", 00:25:17.042 "trsvcid": "$NVMF_PORT", 00:25:17.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.042 "hdgst": ${hdgst:-false}, 00:25:17.042 "ddgst": ${ddgst:-false} 00:25:17.042 }, 00:25:17.042 "method": "bdev_nvme_attach_controller" 00:25:17.042 } 00:25:17.042 EOF 00:25:17.042 )") 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:17.042 { 00:25:17.042 "params": { 00:25:17.042 "name": "Nvme$subsystem", 00:25:17.042 "trtype": "$TEST_TRANSPORT", 00:25:17.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.042 "adrfam": "ipv4", 00:25:17.042 "trsvcid": "$NVMF_PORT", 00:25:17.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.042 "hdgst": ${hdgst:-false}, 00:25:17.042 "ddgst": ${ddgst:-false} 00:25:17.042 }, 00:25:17.042 "method": "bdev_nvme_attach_controller" 00:25:17.042 } 00:25:17.042 EOF 00:25:17.042 )") 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:17.042 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:17.042 { 00:25:17.042 "params": { 00:25:17.042 "name": "Nvme$subsystem", 00:25:17.042 "trtype": "$TEST_TRANSPORT", 00:25:17.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.042 "adrfam": "ipv4", 00:25:17.042 "trsvcid": "$NVMF_PORT", 00:25:17.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.042 "hdgst": ${hdgst:-false}, 00:25:17.042 "ddgst": ${ddgst:-false} 00:25:17.042 }, 00:25:17.043 "method": "bdev_nvme_attach_controller" 00:25:17.043 } 00:25:17.043 EOF 00:25:17.043 )") 00:25:17.043 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:17.043 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:17.043 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:17.043 { 00:25:17.043 "params": { 00:25:17.043 "name": "Nvme$subsystem", 00:25:17.043 "trtype": "$TEST_TRANSPORT", 00:25:17.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.043 "adrfam": "ipv4", 00:25:17.043 "trsvcid": "$NVMF_PORT", 00:25:17.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.043 "hdgst": ${hdgst:-false}, 00:25:17.043 "ddgst": ${ddgst:-false} 00:25:17.043 }, 00:25:17.043 "method": "bdev_nvme_attach_controller" 00:25:17.043 } 00:25:17.043 EOF 00:25:17.043 )") 00:25:17.043 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:17.043 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:17.043 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:17.043 { 00:25:17.043 "params": { 00:25:17.043 "name": "Nvme$subsystem", 00:25:17.043 "trtype": "$TEST_TRANSPORT", 
00:25:17.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.043 "adrfam": "ipv4", 00:25:17.043 "trsvcid": "$NVMF_PORT", 00:25:17.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.043 "hdgst": ${hdgst:-false}, 00:25:17.043 "ddgst": ${ddgst:-false} 00:25:17.043 }, 00:25:17.043 "method": "bdev_nvme_attach_controller" 00:25:17.043 } 00:25:17.043 EOF 00:25:17.043 )") 00:25:17.043 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:17.043 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:17.043 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:17.043 { 00:25:17.043 "params": { 00:25:17.043 "name": "Nvme$subsystem", 00:25:17.043 "trtype": "$TEST_TRANSPORT", 00:25:17.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.043 "adrfam": "ipv4", 00:25:17.043 "trsvcid": "$NVMF_PORT", 00:25:17.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.043 "hdgst": ${hdgst:-false}, 00:25:17.043 "ddgst": ${ddgst:-false} 00:25:17.043 }, 00:25:17.043 "method": "bdev_nvme_attach_controller" 00:25:17.043 } 00:25:17.043 EOF 00:25:17.043 )") 00:25:17.043 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:17.043 [2024-06-10 11:33:45.883223] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:25:17.043 [2024-06-10 11:33:45.883298] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:17.043 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:17.043 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:17.043 { 00:25:17.043 "params": { 00:25:17.043 "name": "Nvme$subsystem", 00:25:17.043 "trtype": "$TEST_TRANSPORT", 00:25:17.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.043 "adrfam": "ipv4", 00:25:17.043 "trsvcid": "$NVMF_PORT", 00:25:17.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.043 "hdgst": ${hdgst:-false}, 00:25:17.043 "ddgst": ${ddgst:-false} 00:25:17.043 }, 00:25:17.043 "method": "bdev_nvme_attach_controller" 00:25:17.043 } 00:25:17.043 EOF 00:25:17.043 )") 00:25:17.043 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:17.043 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:17.043 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:17.043 { 00:25:17.043 "params": { 00:25:17.043 "name": "Nvme$subsystem", 00:25:17.043 "trtype": "$TEST_TRANSPORT", 00:25:17.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.043 "adrfam": "ipv4", 00:25:17.043 "trsvcid": "$NVMF_PORT", 00:25:17.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.043 "hdgst": ${hdgst:-false}, 00:25:17.043 "ddgst": ${ddgst:-false} 00:25:17.043 }, 00:25:17.043 "method": "bdev_nvme_attach_controller" 00:25:17.043 } 00:25:17.043 EOF 00:25:17.043 )") 00:25:17.043 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
cat 00:25:17.043 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:17.043 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:17.043 { 00:25:17.043 "params": { 00:25:17.043 "name": "Nvme$subsystem", 00:25:17.043 "trtype": "$TEST_TRANSPORT", 00:25:17.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.043 "adrfam": "ipv4", 00:25:17.043 "trsvcid": "$NVMF_PORT", 00:25:17.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.043 "hdgst": ${hdgst:-false}, 00:25:17.043 "ddgst": ${ddgst:-false} 00:25:17.043 }, 00:25:17.043 "method": "bdev_nvme_attach_controller" 00:25:17.043 } 00:25:17.043 EOF 00:25:17.043 )") 00:25:17.043 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:17.043 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:25:17.043 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:25:17.043 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.043 11:33:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:17.043 "params": { 00:25:17.043 "name": "Nvme1", 00:25:17.043 "trtype": "rdma", 00:25:17.043 "traddr": "192.168.100.8", 00:25:17.043 "adrfam": "ipv4", 00:25:17.043 "trsvcid": "4420", 00:25:17.043 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:17.043 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:17.043 "hdgst": false, 00:25:17.043 "ddgst": false 00:25:17.043 }, 00:25:17.043 "method": "bdev_nvme_attach_controller" 00:25:17.043 },{ 00:25:17.043 "params": { 00:25:17.043 "name": "Nvme2", 00:25:17.043 "trtype": "rdma", 00:25:17.043 "traddr": "192.168.100.8", 00:25:17.043 "adrfam": "ipv4", 00:25:17.043 "trsvcid": "4420", 00:25:17.043 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:17.043 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:17.043 "hdgst": false, 00:25:17.043 "ddgst": false 00:25:17.043 }, 00:25:17.043 "method": "bdev_nvme_attach_controller" 00:25:17.043 },{ 00:25:17.043 "params": { 00:25:17.043 "name": "Nvme3", 00:25:17.043 "trtype": "rdma", 00:25:17.043 "traddr": "192.168.100.8", 00:25:17.043 "adrfam": "ipv4", 00:25:17.043 "trsvcid": "4420", 00:25:17.043 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:17.043 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:17.043 "hdgst": false, 00:25:17.043 "ddgst": false 00:25:17.043 }, 00:25:17.043 "method": "bdev_nvme_attach_controller" 00:25:17.043 },{ 00:25:17.043 "params": { 00:25:17.043 "name": "Nvme4", 00:25:17.043 "trtype": "rdma", 00:25:17.043 "traddr": "192.168.100.8", 00:25:17.043 "adrfam": "ipv4", 00:25:17.043 "trsvcid": "4420", 00:25:17.043 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:17.043 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:17.043 "hdgst": false, 00:25:17.043 "ddgst": false 00:25:17.043 }, 00:25:17.043 "method": "bdev_nvme_attach_controller" 00:25:17.043 },{ 00:25:17.043 "params": { 00:25:17.043 "name": "Nvme5", 00:25:17.043 "trtype": "rdma", 00:25:17.043 "traddr": "192.168.100.8", 00:25:17.043 "adrfam": "ipv4", 00:25:17.043 "trsvcid": "4420", 00:25:17.043 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:17.043 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:17.043 "hdgst": false, 00:25:17.043 "ddgst": false 00:25:17.043 }, 00:25:17.043 "method": "bdev_nvme_attach_controller" 00:25:17.043 },{ 00:25:17.043 "params": { 00:25:17.043 "name": "Nvme6", 00:25:17.043 "trtype": "rdma", 00:25:17.043 "traddr": 
"192.168.100.8", 00:25:17.043 "adrfam": "ipv4", 00:25:17.043 "trsvcid": "4420", 00:25:17.043 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:17.043 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:17.043 "hdgst": false, 00:25:17.043 "ddgst": false 00:25:17.043 }, 00:25:17.043 "method": "bdev_nvme_attach_controller" 00:25:17.043 },{ 00:25:17.043 "params": { 00:25:17.043 "name": "Nvme7", 00:25:17.043 "trtype": "rdma", 00:25:17.043 "traddr": "192.168.100.8", 00:25:17.043 "adrfam": "ipv4", 00:25:17.043 "trsvcid": "4420", 00:25:17.043 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:17.043 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:17.043 "hdgst": false, 00:25:17.043 "ddgst": false 00:25:17.043 }, 00:25:17.043 "method": "bdev_nvme_attach_controller" 00:25:17.043 },{ 00:25:17.043 "params": { 00:25:17.044 "name": "Nvme8", 00:25:17.044 "trtype": "rdma", 00:25:17.044 "traddr": "192.168.100.8", 00:25:17.044 "adrfam": "ipv4", 00:25:17.044 "trsvcid": "4420", 00:25:17.044 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:17.044 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:17.044 "hdgst": false, 00:25:17.044 "ddgst": false 00:25:17.044 }, 00:25:17.044 "method": "bdev_nvme_attach_controller" 00:25:17.044 },{ 00:25:17.044 "params": { 00:25:17.044 "name": "Nvme9", 00:25:17.044 "trtype": "rdma", 00:25:17.044 "traddr": "192.168.100.8", 00:25:17.044 "adrfam": "ipv4", 00:25:17.044 "trsvcid": "4420", 00:25:17.044 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:17.044 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:17.044 "hdgst": false, 00:25:17.044 "ddgst": false 00:25:17.044 }, 00:25:17.044 "method": "bdev_nvme_attach_controller" 00:25:17.044 },{ 00:25:17.044 "params": { 00:25:17.044 "name": "Nvme10", 00:25:17.044 "trtype": "rdma", 00:25:17.044 "traddr": "192.168.100.8", 00:25:17.044 "adrfam": "ipv4", 00:25:17.044 "trsvcid": "4420", 00:25:17.044 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:17.044 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:17.044 "hdgst": false, 00:25:17.044 "ddgst": false 00:25:17.044 }, 00:25:17.044 "method": "bdev_nvme_attach_controller" 00:25:17.044 }' 00:25:17.044 [2024-06-10 11:33:45.946383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.044 [2024-06-10 11:33:46.010651] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.984 11:33:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:17.984 11:33:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:25:17.984 11:33:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:17.984 11:33:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.984 11:33:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:17.984 11:33:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.984 11:33:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3700844 00:25:17.984 11:33:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:25:17.984 11:33:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:25:18.926 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3700844 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json 
<(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:18.926 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3700493 00:25:18.926 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:18.926 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:18.926 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:25:18.926 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:25:18.926 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:18.926 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:18.926 { 00:25:18.926 "params": { 00:25:18.926 "name": "Nvme$subsystem", 00:25:18.926 "trtype": "$TEST_TRANSPORT", 00:25:18.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:18.926 "adrfam": "ipv4", 00:25:18.926 "trsvcid": "$NVMF_PORT", 00:25:18.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:18.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:18.926 "hdgst": ${hdgst:-false}, 00:25:18.926 "ddgst": ${ddgst:-false} 00:25:18.926 }, 00:25:18.926 "method": "bdev_nvme_attach_controller" 00:25:18.926 } 00:25:18.926 EOF 00:25:18.926 )") 00:25:18.926 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:18.926 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:18.926 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:18.926 { 00:25:18.926 "params": { 00:25:18.926 "name": "Nvme$subsystem", 00:25:18.926 "trtype": "$TEST_TRANSPORT", 00:25:18.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:18.926 "adrfam": "ipv4", 00:25:18.926 "trsvcid": "$NVMF_PORT", 00:25:18.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:18.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:18.926 "hdgst": ${hdgst:-false}, 00:25:18.926 "ddgst": ${ddgst:-false} 00:25:18.926 }, 00:25:18.926 "method": "bdev_nvme_attach_controller" 00:25:18.926 } 00:25:18.926 EOF 00:25:18.926 )") 00:25:19.187 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:19.187 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:19.187 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:19.187 { 00:25:19.187 "params": { 00:25:19.187 "name": "Nvme$subsystem", 00:25:19.187 "trtype": "$TEST_TRANSPORT", 00:25:19.187 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.187 "adrfam": "ipv4", 00:25:19.187 "trsvcid": "$NVMF_PORT", 00:25:19.187 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.187 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.187 "hdgst": ${hdgst:-false}, 00:25:19.188 "ddgst": ${ddgst:-false} 00:25:19.188 }, 00:25:19.188 "method": "bdev_nvme_attach_controller" 00:25:19.188 } 00:25:19.188 EOF 00:25:19.188 )") 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:19.188 { 00:25:19.188 "params": { 00:25:19.188 "name": "Nvme$subsystem", 00:25:19.188 "trtype": "$TEST_TRANSPORT", 00:25:19.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.188 "adrfam": "ipv4", 00:25:19.188 "trsvcid": "$NVMF_PORT", 00:25:19.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.188 "hdgst": ${hdgst:-false}, 00:25:19.188 "ddgst": ${ddgst:-false} 00:25:19.188 }, 00:25:19.188 "method": "bdev_nvme_attach_controller" 00:25:19.188 } 00:25:19.188 EOF 00:25:19.188 )") 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:19.188 { 00:25:19.188 "params": { 00:25:19.188 "name": "Nvme$subsystem", 00:25:19.188 "trtype": "$TEST_TRANSPORT", 00:25:19.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.188 "adrfam": "ipv4", 00:25:19.188 "trsvcid": "$NVMF_PORT", 00:25:19.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.188 "hdgst": ${hdgst:-false}, 00:25:19.188 "ddgst": ${ddgst:-false} 00:25:19.188 }, 00:25:19.188 "method": "bdev_nvme_attach_controller" 00:25:19.188 } 00:25:19.188 EOF 00:25:19.188 )") 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:19.188 { 00:25:19.188 "params": { 00:25:19.188 "name": "Nvme$subsystem", 00:25:19.188 "trtype": "$TEST_TRANSPORT", 00:25:19.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.188 "adrfam": "ipv4", 00:25:19.188 "trsvcid": "$NVMF_PORT", 00:25:19.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.188 "hdgst": ${hdgst:-false}, 00:25:19.188 "ddgst": ${ddgst:-false} 00:25:19.188 }, 00:25:19.188 "method": "bdev_nvme_attach_controller" 00:25:19.188 } 00:25:19.188 EOF 00:25:19.188 )") 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:19.188 [2024-06-10 11:33:47.930454] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
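For reference: the config=() / cat <<-EOF / IFS=, sequence being traced here is the core of gen_nvmf_target_json (nvmf/common.sh@532-558) — one bdev_nvme_attach_controller JSON fragment is appended per subsystem argument, then the fragments are comma-joined and validated with jq before being handed to bdev_svc/bdevperf as --json input. A minimal standalone sketch of the same pattern follows; the outer "subsystems"/"bdev" wrapper is a simplified assumption (the real wrapper is not visible in this log), and the defaults mirror this run (rdma / 192.168.100.8 / 4420):

#!/usr/bin/env bash
# Sketch of the per-subsystem heredoc pattern traced above. hdgst/ddgst
# fall back to false exactly as the ${hdgst:-false} expansions do.
TEST_TRANSPORT=${TEST_TRANSPORT:-rdma}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-192.168.100.8}
NVMF_PORT=${NVMF_PORT:-4420}

config=()
for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# Comma-join the fragments (IFS=,) and pretty-print/validate with jq;
# the wrapper object is an assumed stand-in for what nvmf/common.sh emits.
(
    IFS=,
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
JSON
)

Invoked with arguments 1 through 10, as gen_nvmf_target_json is in this trace, the sketch yields the same ten attach-controller entries that nvmf/common.sh@558 prints a few records below.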
00:25:19.188 [2024-06-10 11:33:47.930506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3701241 ] 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:19.188 { 00:25:19.188 "params": { 00:25:19.188 "name": "Nvme$subsystem", 00:25:19.188 "trtype": "$TEST_TRANSPORT", 00:25:19.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.188 "adrfam": "ipv4", 00:25:19.188 "trsvcid": "$NVMF_PORT", 00:25:19.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.188 "hdgst": ${hdgst:-false}, 00:25:19.188 "ddgst": ${ddgst:-false} 00:25:19.188 }, 00:25:19.188 "method": "bdev_nvme_attach_controller" 00:25:19.188 } 00:25:19.188 EOF 00:25:19.188 )") 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:19.188 { 00:25:19.188 "params": { 00:25:19.188 "name": "Nvme$subsystem", 00:25:19.188 "trtype": "$TEST_TRANSPORT", 00:25:19.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.188 "adrfam": "ipv4", 00:25:19.188 "trsvcid": "$NVMF_PORT", 00:25:19.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.188 "hdgst": ${hdgst:-false}, 00:25:19.188 "ddgst": ${ddgst:-false} 00:25:19.188 }, 00:25:19.188 "method": "bdev_nvme_attach_controller" 00:25:19.188 } 00:25:19.188 EOF 00:25:19.188 )") 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:19.188 { 00:25:19.188 "params": { 00:25:19.188 "name": "Nvme$subsystem", 00:25:19.188 "trtype": "$TEST_TRANSPORT", 00:25:19.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.188 "adrfam": "ipv4", 00:25:19.188 "trsvcid": "$NVMF_PORT", 00:25:19.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.188 "hdgst": ${hdgst:-false}, 00:25:19.188 "ddgst": ${ddgst:-false} 00:25:19.188 }, 00:25:19.188 "method": "bdev_nvme_attach_controller" 00:25:19.188 } 00:25:19.188 EOF 00:25:19.188 )") 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:19.188 { 00:25:19.188 "params": { 00:25:19.188 "name": "Nvme$subsystem", 00:25:19.188 "trtype": "$TEST_TRANSPORT", 00:25:19.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.188 "adrfam": "ipv4", 00:25:19.188 "trsvcid": "$NVMF_PORT", 00:25:19.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.188 "hdgst": 
${hdgst:-false}, 00:25:19.188 "ddgst": ${ddgst:-false} 00:25:19.188 }, 00:25:19.188 "method": "bdev_nvme_attach_controller" 00:25:19.188 } 00:25:19.188 EOF 00:25:19.188 )") 00:25:19.188 EAL: No free 2048 kB hugepages reported on node 1 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:25:19.188 11:33:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:19.188 "params": { 00:25:19.188 "name": "Nvme1", 00:25:19.188 "trtype": "rdma", 00:25:19.188 "traddr": "192.168.100.8", 00:25:19.188 "adrfam": "ipv4", 00:25:19.188 "trsvcid": "4420", 00:25:19.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:19.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:19.188 "hdgst": false, 00:25:19.188 "ddgst": false 00:25:19.188 }, 00:25:19.188 "method": "bdev_nvme_attach_controller" 00:25:19.188 },{ 00:25:19.188 "params": { 00:25:19.188 "name": "Nvme2", 00:25:19.188 "trtype": "rdma", 00:25:19.188 "traddr": "192.168.100.8", 00:25:19.188 "adrfam": "ipv4", 00:25:19.188 "trsvcid": "4420", 00:25:19.188 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:19.188 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:19.188 "hdgst": false, 00:25:19.188 "ddgst": false 00:25:19.188 }, 00:25:19.188 "method": "bdev_nvme_attach_controller" 00:25:19.188 },{ 00:25:19.188 "params": { 00:25:19.188 "name": "Nvme3", 00:25:19.188 "trtype": "rdma", 00:25:19.188 "traddr": "192.168.100.8", 00:25:19.188 "adrfam": "ipv4", 00:25:19.188 "trsvcid": "4420", 00:25:19.188 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:19.188 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:19.188 "hdgst": false, 00:25:19.188 "ddgst": false 00:25:19.188 }, 00:25:19.188 "method": "bdev_nvme_attach_controller" 00:25:19.188 },{ 00:25:19.188 "params": { 00:25:19.188 "name": "Nvme4", 00:25:19.188 "trtype": "rdma", 00:25:19.188 "traddr": "192.168.100.8", 00:25:19.188 "adrfam": "ipv4", 00:25:19.188 "trsvcid": "4420", 00:25:19.188 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:19.188 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:19.188 "hdgst": false, 00:25:19.188 "ddgst": false 00:25:19.188 }, 00:25:19.188 "method": "bdev_nvme_attach_controller" 00:25:19.188 },{ 00:25:19.188 "params": { 00:25:19.188 "name": "Nvme5", 00:25:19.188 "trtype": "rdma", 00:25:19.188 "traddr": "192.168.100.8", 00:25:19.188 "adrfam": "ipv4", 00:25:19.188 "trsvcid": "4420", 00:25:19.188 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:19.188 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:19.188 "hdgst": false, 00:25:19.188 "ddgst": false 00:25:19.188 }, 00:25:19.188 "method": "bdev_nvme_attach_controller" 00:25:19.188 },{ 00:25:19.188 "params": { 00:25:19.188 "name": "Nvme6", 00:25:19.188 "trtype": "rdma", 00:25:19.188 "traddr": "192.168.100.8", 00:25:19.188 "adrfam": "ipv4", 00:25:19.188 "trsvcid": "4420", 00:25:19.188 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:19.189 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:19.189 "hdgst": false, 00:25:19.189 "ddgst": false 00:25:19.189 }, 00:25:19.189 "method": "bdev_nvme_attach_controller" 00:25:19.189 },{ 00:25:19.189 "params": { 00:25:19.189 "name": "Nvme7", 00:25:19.189 "trtype": "rdma", 00:25:19.189 "traddr": "192.168.100.8", 00:25:19.189 "adrfam": "ipv4", 00:25:19.189 "trsvcid": "4420", 00:25:19.189 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:19.189 "hostnqn": "nqn.2016-06.io.spdk:host7", 
00:25:19.189 "hdgst": false, 00:25:19.189 "ddgst": false 00:25:19.189 }, 00:25:19.189 "method": "bdev_nvme_attach_controller" 00:25:19.189 },{ 00:25:19.189 "params": { 00:25:19.189 "name": "Nvme8", 00:25:19.189 "trtype": "rdma", 00:25:19.189 "traddr": "192.168.100.8", 00:25:19.189 "adrfam": "ipv4", 00:25:19.189 "trsvcid": "4420", 00:25:19.189 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:19.189 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:19.189 "hdgst": false, 00:25:19.189 "ddgst": false 00:25:19.189 }, 00:25:19.189 "method": "bdev_nvme_attach_controller" 00:25:19.189 },{ 00:25:19.189 "params": { 00:25:19.189 "name": "Nvme9", 00:25:19.189 "trtype": "rdma", 00:25:19.189 "traddr": "192.168.100.8", 00:25:19.189 "adrfam": "ipv4", 00:25:19.189 "trsvcid": "4420", 00:25:19.189 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:19.189 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:19.189 "hdgst": false, 00:25:19.189 "ddgst": false 00:25:19.189 }, 00:25:19.189 "method": "bdev_nvme_attach_controller" 00:25:19.189 },{ 00:25:19.189 "params": { 00:25:19.189 "name": "Nvme10", 00:25:19.189 "trtype": "rdma", 00:25:19.189 "traddr": "192.168.100.8", 00:25:19.189 "adrfam": "ipv4", 00:25:19.189 "trsvcid": "4420", 00:25:19.189 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:19.189 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:19.189 "hdgst": false, 00:25:19.189 "ddgst": false 00:25:19.189 }, 00:25:19.189 "method": "bdev_nvme_attach_controller" 00:25:19.189 }' 00:25:19.189 [2024-06-10 11:33:47.991352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.189 [2024-06-10 11:33:48.055490] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.131 Running I/O for 1 seconds... 00:25:21.518 00:25:21.518 Latency(us) 00:25:21.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.518 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:21.518 Verification LBA range: start 0x0 length 0x400 00:25:21.518 Nvme1n1 : 1.21 277.41 17.34 0.00 0.00 224725.66 13653.33 221948.59 00:25:21.518 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:21.518 Verification LBA range: start 0x0 length 0x400 00:25:21.518 Nvme2n1 : 1.21 277.13 17.32 0.00 0.00 221263.64 16711.68 208841.39 00:25:21.518 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:21.518 Verification LBA range: start 0x0 length 0x400 00:25:21.518 Nvme3n1 : 1.21 294.11 18.38 0.00 0.00 205926.74 2867.20 193986.56 00:25:21.518 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:21.518 Verification LBA range: start 0x0 length 0x400 00:25:21.518 Nvme4n1 : 1.22 315.90 19.74 0.00 0.00 189628.87 4860.59 170393.60 00:25:21.518 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:21.518 Verification LBA range: start 0x0 length 0x400 00:25:21.518 Nvme5n1 : 1.23 313.02 19.56 0.00 0.00 188049.35 9229.65 179131.73 00:25:21.518 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:21.518 Verification LBA range: start 0x0 length 0x400 00:25:21.518 Nvme6n1 : 1.22 315.13 19.70 0.00 0.00 184430.93 15947.09 149422.08 00:25:21.518 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:21.518 Verification LBA range: start 0x0 length 0x400 00:25:21.518 Nvme7n1 : 1.22 314.63 19.66 0.00 0.00 181414.68 16930.13 130198.19 00:25:21.518 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:21.518 Verification LBA range: start 0x0 
length 0x400 00:25:21.518 Nvme8n1 : 1.22 314.13 19.63 0.00 0.00 178404.98 18022.40 118838.61 00:25:21.518 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:21.518 Verification LBA range: start 0x0 length 0x400 00:25:21.518 Nvme9n1 : 1.23 312.72 19.54 0.00 0.00 175538.35 3577.17 172141.23 00:25:21.518 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:21.518 Verification LBA range: start 0x0 length 0x400 00:25:21.518 Nvme10n1 : 1.23 261.14 16.32 0.00 0.00 206551.38 14308.69 230686.72 00:25:21.518 =================================================================================================================== 00:25:21.518 Total : 2995.31 187.21 0.00 0.00 194605.01 2867.20 230686.72 00:25:21.518 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:25:21.518 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:21.518 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:21.518 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:21.518 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:21.518 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:21.518 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:25:21.518 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:21.518 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:21.518 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:25:21.518 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:21.518 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:21.518 rmmod nvme_rdma 00:25:21.518 rmmod nvme_fabrics 00:25:21.518 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:21.518 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:25:21.518 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:25:21.518 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3700493 ']' 00:25:21.518 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3700493 00:25:21.518 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@949 -- # '[' -z 3700493 ']' 00:25:21.518 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # kill -0 3700493 00:25:21.518 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # uname 00:25:21.518 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:21.518 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3700493 00:25:21.779 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:21.779 11:33:50 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:21.779 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3700493' 00:25:21.779 killing process with pid 3700493 00:25:21.779 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # kill 3700493 00:25:21.779 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # wait 3700493 00:25:22.040 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:22.040 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:22.040 00:25:22.040 real 0m13.802s 00:25:22.040 user 0m30.310s 00:25:22.040 sys 0m6.254s 00:25:22.040 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:22.041 ************************************ 00:25:22.041 END TEST nvmf_shutdown_tc1 00:25:22.041 ************************************ 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:22.041 ************************************ 00:25:22.041 START TEST nvmf_shutdown_tc2 00:25:22.041 ************************************ 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc2 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:25:22.041 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:25:22.041 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:25:22.041 Found net devices under 0000:98:00.0: mlx_0_0 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:25:22.041 Found net devices under 0000:98:00.1: mlx_0_1 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # rdma_device_init 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # uname 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:22.041 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:22.042 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:22.042 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:22.042 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:22.042 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:22.042 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:22.042 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:22.042 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:22.042 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:22.042 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:22.042 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:25:22.042 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:22.042 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:22.042 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:22.042 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:22.042 11:33:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:22.042 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:22.042 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:25:22.042 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:22.042 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:22.042 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:22.042 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:22.042 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:22.042 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:22.302 26: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:22.302 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:25:22.302 altname enp152s0f0np0 00:25:22.302 altname ens817f0np0 00:25:22.302 inet 192.168.100.8/24 scope global mlx_0_0 00:25:22.302 valid_lft forever preferred_lft forever 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # 
[[ -z 192.168.100.9 ]] 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:22.302 27: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:22.302 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:25:22.302 altname enp152s0f1np1 00:25:22.302 altname ens817f1np1 00:25:22.302 inet 192.168.100.9/24 scope global mlx_0_1 00:25:22.302 valid_lft forever preferred_lft forever 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:22.302 11:33:51 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:22.302 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:22.303 192.168.100.9' 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:22.303 192.168.100.9' 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # head -n 1 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:22.303 192.168.100.9' 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # tail -n +2 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # head -n 1 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3702012 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3702012 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 3702012 ']' 00:25:22.303 
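An aside on the slicing idiom just traced: get_ip_address derives each RDMA interface's IPv4 address by cutting up `ip -o -4 addr show` output, and nvmf/common.sh@457-458 then splits the collected list into the first and second target IPs with head/tail. A condensed, runnable sketch of the same extraction (interface names taken from this run; on other nodes get_rdma_if_list discovers them from the PCI net/ directories shown earlier):

#!/usr/bin/env bash
# Mirror of get_ip_address: -o prints one record per line, field 4 is
# "addr/prefixlen", and cut -d/ -f1 drops the prefix length.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# One IP per RDMA netdev (mlx_0_0 / mlx_0_1 in this run), then the same
# head/tail split the harness performs on RDMA_IP_LIST:
RDMA_IP_LIST=$(
    for nic in mlx_0_0 mlx_0_1; do
        get_ip_address "$nic"
    done
)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8 here
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9 here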
11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:22.303 11:33:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:22.303 [2024-06-10 11:33:51.204369] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:25:22.303 [2024-06-10 11:33:51.204435] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:22.303 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.563 [2024-06-10 11:33:51.284231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:22.563 [2024-06-10 11:33:51.338495] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:22.563 [2024-06-10 11:33:51.338526] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:22.563 [2024-06-10 11:33:51.338532] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:22.563 [2024-06-10 11:33:51.338537] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:22.563 [2024-06-10 11:33:51.338540] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:22.563 [2024-06-10 11:33:51.338652] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:25:22.563 [2024-06-10 11:33:51.338811] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:25:22.563 [2024-06-10 11:33:51.339157] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.563 [2024-06-10 11:33:51.339158] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:25:23.133 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:23.133 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:25:23.133 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:23.133 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:23.133 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:23.133 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:23.133 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:23.133 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.133 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:23.133 [2024-06-10 11:33:52.095430] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14da320/0x14de810) succeed. 00:25:23.394 [2024-06-10 11:33:52.106400] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14db960/0x151fea0) succeed. 
00:25:23.394 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.394 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:23.394 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.395 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:23.395 Malloc1 00:25:23.395 [2024-06-10 11:33:52.301031] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:23.395 Malloc2 00:25:23.395 Malloc3 00:25:23.655 Malloc4 
00:25:23.655 Malloc5 00:25:23.655 Malloc6 00:25:23.655 Malloc7 00:25:23.655 Malloc8 00:25:23.655 Malloc9 00:25:23.917 Malloc10 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3702324 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3702324 /var/tmp/bdevperf.sock 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 3702324 ']' 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:23.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:23.917 { 00:25:23.917 "params": { 00:25:23.917 "name": "Nvme$subsystem", 00:25:23.917 "trtype": "$TEST_TRANSPORT", 00:25:23.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.917 "adrfam": "ipv4", 00:25:23.917 "trsvcid": "$NVMF_PORT", 00:25:23.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.917 "hdgst": ${hdgst:-false}, 00:25:23.917 "ddgst": ${ddgst:-false} 00:25:23.917 }, 00:25:23.917 "method": "bdev_nvme_attach_controller" 00:25:23.917 } 00:25:23.917 EOF 00:25:23.917 )") 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:23.917 { 00:25:23.917 "params": { 00:25:23.917 "name": "Nvme$subsystem", 
00:25:23.917 "trtype": "$TEST_TRANSPORT", 00:25:23.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.917 "adrfam": "ipv4", 00:25:23.917 "trsvcid": "$NVMF_PORT", 00:25:23.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.917 "hdgst": ${hdgst:-false}, 00:25:23.917 "ddgst": ${ddgst:-false} 00:25:23.917 }, 00:25:23.917 "method": "bdev_nvme_attach_controller" 00:25:23.917 } 00:25:23.917 EOF 00:25:23.917 )") 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:23.917 { 00:25:23.917 "params": { 00:25:23.917 "name": "Nvme$subsystem", 00:25:23.917 "trtype": "$TEST_TRANSPORT", 00:25:23.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.917 "adrfam": "ipv4", 00:25:23.917 "trsvcid": "$NVMF_PORT", 00:25:23.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.917 "hdgst": ${hdgst:-false}, 00:25:23.917 "ddgst": ${ddgst:-false} 00:25:23.917 }, 00:25:23.917 "method": "bdev_nvme_attach_controller" 00:25:23.917 } 00:25:23.917 EOF 00:25:23.917 )") 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:23.917 { 00:25:23.917 "params": { 00:25:23.917 "name": "Nvme$subsystem", 00:25:23.917 "trtype": "$TEST_TRANSPORT", 00:25:23.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.917 "adrfam": "ipv4", 00:25:23.917 "trsvcid": "$NVMF_PORT", 00:25:23.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.917 "hdgst": ${hdgst:-false}, 00:25:23.917 "ddgst": ${ddgst:-false} 00:25:23.917 }, 00:25:23.917 "method": "bdev_nvme_attach_controller" 00:25:23.917 } 00:25:23.917 EOF 00:25:23.917 )") 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:23.917 { 00:25:23.917 "params": { 00:25:23.917 "name": "Nvme$subsystem", 00:25:23.917 "trtype": "$TEST_TRANSPORT", 00:25:23.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.917 "adrfam": "ipv4", 00:25:23.917 "trsvcid": "$NVMF_PORT", 00:25:23.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.917 "hdgst": ${hdgst:-false}, 00:25:23.917 "ddgst": ${ddgst:-false} 00:25:23.917 }, 00:25:23.917 "method": "bdev_nvme_attach_controller" 00:25:23.917 } 00:25:23.917 EOF 00:25:23.917 )") 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:23.917 { 00:25:23.917 "params": { 00:25:23.917 "name": "Nvme$subsystem", 00:25:23.917 
"trtype": "$TEST_TRANSPORT", 00:25:23.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.917 "adrfam": "ipv4", 00:25:23.917 "trsvcid": "$NVMF_PORT", 00:25:23.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.917 "hdgst": ${hdgst:-false}, 00:25:23.917 "ddgst": ${ddgst:-false} 00:25:23.917 }, 00:25:23.917 "method": "bdev_nvme_attach_controller" 00:25:23.917 } 00:25:23.917 EOF 00:25:23.917 )") 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:23.917 [2024-06-10 11:33:52.748971] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:25:23.917 [2024-06-10 11:33:52.749021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3702324 ] 00:25:23.917 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:23.918 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:23.918 { 00:25:23.918 "params": { 00:25:23.918 "name": "Nvme$subsystem", 00:25:23.918 "trtype": "$TEST_TRANSPORT", 00:25:23.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.918 "adrfam": "ipv4", 00:25:23.918 "trsvcid": "$NVMF_PORT", 00:25:23.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.918 "hdgst": ${hdgst:-false}, 00:25:23.918 "ddgst": ${ddgst:-false} 00:25:23.918 }, 00:25:23.918 "method": "bdev_nvme_attach_controller" 00:25:23.918 } 00:25:23.918 EOF 00:25:23.918 )") 00:25:23.918 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:23.918 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:23.918 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:23.918 { 00:25:23.918 "params": { 00:25:23.918 "name": "Nvme$subsystem", 00:25:23.918 "trtype": "$TEST_TRANSPORT", 00:25:23.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.918 "adrfam": "ipv4", 00:25:23.918 "trsvcid": "$NVMF_PORT", 00:25:23.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.918 "hdgst": ${hdgst:-false}, 00:25:23.918 "ddgst": ${ddgst:-false} 00:25:23.918 }, 00:25:23.918 "method": "bdev_nvme_attach_controller" 00:25:23.918 } 00:25:23.918 EOF 00:25:23.918 )") 00:25:23.918 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:23.918 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:23.918 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:23.918 { 00:25:23.918 "params": { 00:25:23.918 "name": "Nvme$subsystem", 00:25:23.918 "trtype": "$TEST_TRANSPORT", 00:25:23.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.918 "adrfam": "ipv4", 00:25:23.918 "trsvcid": "$NVMF_PORT", 00:25:23.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.918 "hdgst": ${hdgst:-false}, 00:25:23.918 "ddgst": ${ddgst:-false} 00:25:23.918 }, 00:25:23.918 "method": "bdev_nvme_attach_controller" 00:25:23.918 } 00:25:23.918 EOF 00:25:23.918 )") 00:25:23.918 11:33:52 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:23.918 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:23.918 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:23.918 { 00:25:23.918 "params": { 00:25:23.918 "name": "Nvme$subsystem", 00:25:23.918 "trtype": "$TEST_TRANSPORT", 00:25:23.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.918 "adrfam": "ipv4", 00:25:23.918 "trsvcid": "$NVMF_PORT", 00:25:23.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.918 "hdgst": ${hdgst:-false}, 00:25:23.918 "ddgst": ${ddgst:-false} 00:25:23.918 }, 00:25:23.918 "method": "bdev_nvme_attach_controller" 00:25:23.918 } 00:25:23.918 EOF 00:25:23.918 )") 00:25:23.918 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.918 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:23.918 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:25:23.918 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:25:23.918 11:33:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:23.918 "params": { 00:25:23.918 "name": "Nvme1", 00:25:23.918 "trtype": "rdma", 00:25:23.918 "traddr": "192.168.100.8", 00:25:23.918 "adrfam": "ipv4", 00:25:23.918 "trsvcid": "4420", 00:25:23.918 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.918 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:23.918 "hdgst": false, 00:25:23.918 "ddgst": false 00:25:23.918 }, 00:25:23.918 "method": "bdev_nvme_attach_controller" 00:25:23.918 },{ 00:25:23.918 "params": { 00:25:23.918 "name": "Nvme2", 00:25:23.918 "trtype": "rdma", 00:25:23.918 "traddr": "192.168.100.8", 00:25:23.918 "adrfam": "ipv4", 00:25:23.918 "trsvcid": "4420", 00:25:23.918 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:23.918 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:23.918 "hdgst": false, 00:25:23.918 "ddgst": false 00:25:23.918 }, 00:25:23.918 "method": "bdev_nvme_attach_controller" 00:25:23.918 },{ 00:25:23.918 "params": { 00:25:23.918 "name": "Nvme3", 00:25:23.918 "trtype": "rdma", 00:25:23.918 "traddr": "192.168.100.8", 00:25:23.918 "adrfam": "ipv4", 00:25:23.918 "trsvcid": "4420", 00:25:23.918 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:23.918 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:23.918 "hdgst": false, 00:25:23.918 "ddgst": false 00:25:23.918 }, 00:25:23.918 "method": "bdev_nvme_attach_controller" 00:25:23.918 },{ 00:25:23.918 "params": { 00:25:23.918 "name": "Nvme4", 00:25:23.918 "trtype": "rdma", 00:25:23.918 "traddr": "192.168.100.8", 00:25:23.918 "adrfam": "ipv4", 00:25:23.918 "trsvcid": "4420", 00:25:23.918 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:23.918 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:23.918 "hdgst": false, 00:25:23.918 "ddgst": false 00:25:23.918 }, 00:25:23.918 "method": "bdev_nvme_attach_controller" 00:25:23.918 },{ 00:25:23.918 "params": { 00:25:23.918 "name": "Nvme5", 00:25:23.918 "trtype": "rdma", 00:25:23.918 "traddr": "192.168.100.8", 00:25:23.918 "adrfam": "ipv4", 00:25:23.918 "trsvcid": "4420", 00:25:23.918 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:23.918 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:23.918 "hdgst": false, 00:25:23.918 "ddgst": false 00:25:23.918 }, 00:25:23.918 "method": "bdev_nvme_attach_controller" 00:25:23.918 },{ 00:25:23.918 "params": { 00:25:23.918 
"name": "Nvme6", 00:25:23.918 "trtype": "rdma", 00:25:23.918 "traddr": "192.168.100.8", 00:25:23.918 "adrfam": "ipv4", 00:25:23.918 "trsvcid": "4420", 00:25:23.918 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:23.918 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:23.918 "hdgst": false, 00:25:23.918 "ddgst": false 00:25:23.918 }, 00:25:23.918 "method": "bdev_nvme_attach_controller" 00:25:23.918 },{ 00:25:23.918 "params": { 00:25:23.918 "name": "Nvme7", 00:25:23.918 "trtype": "rdma", 00:25:23.918 "traddr": "192.168.100.8", 00:25:23.918 "adrfam": "ipv4", 00:25:23.918 "trsvcid": "4420", 00:25:23.918 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:23.918 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:23.918 "hdgst": false, 00:25:23.918 "ddgst": false 00:25:23.918 }, 00:25:23.918 "method": "bdev_nvme_attach_controller" 00:25:23.918 },{ 00:25:23.918 "params": { 00:25:23.918 "name": "Nvme8", 00:25:23.918 "trtype": "rdma", 00:25:23.918 "traddr": "192.168.100.8", 00:25:23.918 "adrfam": "ipv4", 00:25:23.918 "trsvcid": "4420", 00:25:23.918 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:23.918 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:23.918 "hdgst": false, 00:25:23.918 "ddgst": false 00:25:23.918 }, 00:25:23.918 "method": "bdev_nvme_attach_controller" 00:25:23.918 },{ 00:25:23.918 "params": { 00:25:23.918 "name": "Nvme9", 00:25:23.918 "trtype": "rdma", 00:25:23.918 "traddr": "192.168.100.8", 00:25:23.918 "adrfam": "ipv4", 00:25:23.918 "trsvcid": "4420", 00:25:23.918 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:23.918 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:23.918 "hdgst": false, 00:25:23.918 "ddgst": false 00:25:23.918 }, 00:25:23.918 "method": "bdev_nvme_attach_controller" 00:25:23.918 },{ 00:25:23.918 "params": { 00:25:23.918 "name": "Nvme10", 00:25:23.918 "trtype": "rdma", 00:25:23.918 "traddr": "192.168.100.8", 00:25:23.918 "adrfam": "ipv4", 00:25:23.918 "trsvcid": "4420", 00:25:23.918 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:23.918 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:23.918 "hdgst": false, 00:25:23.918 "ddgst": false 00:25:23.918 }, 00:25:23.918 "method": "bdev_nvme_attach_controller" 00:25:23.918 }' 00:25:23.918 [2024-06-10 11:33:52.809891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.918 [2024-06-10 11:33:52.874468] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.860 Running I/O for 10 seconds... 
00:25:24.860 11:33:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:24.860 11:33:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:25:24.860 11:33:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:24.860 11:33:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.860 11:33:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:25.120 11:33:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.120 11:33:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:25.120 11:33:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:25.120 11:33:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:25.120 11:33:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:25:25.121 11:33:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:25:25.121 11:33:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:25.121 11:33:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:25.121 11:33:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:25.121 11:33:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:25.121 11:33:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.121 11:33:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:25.381 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.381 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:25:25.381 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:25:25.381 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:25.641 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:25.641 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:25.641 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:25.642 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:25.642 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.642 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:25.642 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.642 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=155 00:25:25.642 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 155 -ge 100 ']' 00:25:25.642 11:33:54 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:25:25.642 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:25:25.642 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:25:25.642 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3702324 00:25:25.642 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 3702324 ']' 00:25:25.642 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 3702324 00:25:25.642 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:25:25.642 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:25.642 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3702324 00:25:25.902 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:25.902 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:25.902 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3702324' 00:25:25.902 killing process with pid 3702324 00:25:25.902 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 3702324 00:25:25.902 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 3702324 00:25:25.902 Received shutdown signal, test time was about 0.992233 seconds 00:25:25.902 00:25:25.902 Latency(us) 00:25:25.902 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.902 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:25.902 Verification LBA range: start 0x0 length 0x400 00:25:25.902 Nvme1n1 : 0.98 286.47 17.90 0.00 0.00 218638.24 10048.85 248162.99 00:25:25.902 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:25.902 Verification LBA range: start 0x0 length 0x400 00:25:25.902 Nvme2n1 : 0.98 294.23 18.39 0.00 0.00 208922.55 10485.76 239424.85 00:25:25.902 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:25.902 Verification LBA range: start 0x0 length 0x400 00:25:25.902 Nvme3n1 : 0.98 326.39 20.40 0.00 0.00 184909.53 3713.71 174762.67 00:25:25.902 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:25.902 Verification LBA range: start 0x0 length 0x400 00:25:25.903 Nvme4n1 : 0.98 325.91 20.37 0.00 0.00 181088.51 11250.35 166024.53 00:25:25.903 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:25.903 Verification LBA range: start 0x0 length 0x400 00:25:25.903 Nvme5n1 : 0.98 325.30 20.33 0.00 0.00 178476.20 12342.61 150295.89 00:25:25.903 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:25.903 Verification LBA range: start 0x0 length 0x400 00:25:25.903 Nvme6n1 : 0.99 324.70 20.29 0.00 0.00 174953.05 13216.43 133693.44 00:25:25.903 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:25.903 Verification LBA range: start 0x0 length 0x400 00:25:25.903 Nvme7n1 : 0.99 324.19 20.26 0.00 0.00 170642.52 13817.17 123207.68 00:25:25.903 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:25:25.903 Verification LBA range: start 0x0 length 0x400 00:25:25.903 Nvme8n1 : 0.99 323.57 20.22 0.00 0.00 168019.29 14745.60 134567.25 00:25:25.903 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:25.903 Verification LBA range: start 0x0 length 0x400 00:25:25.903 Nvme9n1 : 0.99 322.94 20.18 0.00 0.00 164594.43 9721.17 152917.33 00:25:25.903 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:25.903 Verification LBA range: start 0x0 length 0x400 00:25:25.903 Nvme10n1 : 0.98 131.14 8.20 0.00 0.00 394264.32 9721.17 580212.05 00:25:25.903 =================================================================================================================== 00:25:25.903 Total : 2984.84 186.55 0.00 0.00 191795.97 3713.71 580212.05 00:25:26.164 11:33:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:25:27.106 11:33:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3702012 00:25:27.106 11:33:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:25:27.106 11:33:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:27.106 11:33:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:27.106 11:33:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:27.106 11:33:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:27.106 11:33:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:27.106 11:33:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:25:27.106 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:27.106 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:27.106 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:25:27.106 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:27.106 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:27.106 rmmod nvme_rdma 00:25:27.106 rmmod nvme_fabrics 00:25:27.106 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:27.106 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:25:27.106 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:25:27.106 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3702012 ']' 00:25:27.106 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3702012 00:25:27.106 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 3702012 ']' 00:25:27.106 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 3702012 00:25:27.106 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:25:27.106 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:27.106 11:33:56 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3702012 00:25:27.368 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:27.368 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:27.368 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3702012' 00:25:27.368 killing process with pid 3702012 00:25:27.368 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 3702012 00:25:27.368 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 3702012 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:27.629 00:25:27.629 real 0m5.518s 00:25:27.629 user 0m22.452s 00:25:27.629 sys 0m0.981s 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:27.629 ************************************ 00:25:27.629 END TEST nvmf_shutdown_tc2 00:25:27.629 ************************************ 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:27.629 ************************************ 00:25:27.629 START TEST nvmf_shutdown_tc3 00:25:27.629 ************************************ 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc3 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:27.629 11:33:56 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:27.629 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:27.630 11:33:56 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:25:27.630 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:25:27.630 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:25:27.630 Found net 
devices under 0000:98:00.0: mlx_0_0 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:25:27.630 Found net devices under 0000:98:00.1: mlx_0_1 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # rdma_device_init 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # uname 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:27.630 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:27.892 26: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:27.892 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:25:27.892 altname enp152s0f0np0 00:25:27.892 altname ens817f0np0 00:25:27.892 inet 192.168.100.8/24 scope global mlx_0_0 00:25:27.892 valid_lft forever preferred_lft forever 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:27.892 11:33:56 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:27.892 27: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:27.892 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:25:27.892 altname enp152s0f1np1 00:25:27.892 altname ens817f1np1 00:25:27.892 inet 192.168.100.9/24 scope global mlx_0_1 00:25:27.892 valid_lft forever preferred_lft forever 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in 
$(get_rdma_if_list) 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:27.892 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:27.893 192.168.100.9' 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:27.893 192.168.100.9' 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # head -n 1 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:27.893 192.168.100.9' 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # tail -n +2 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # head -n 1 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3703207 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3703207 00:25:27.893 11:33:56 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 3703207 ']' 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:27.893 11:33:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:27.893 [2024-06-10 11:33:56.791197] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:25:27.893 [2024-06-10 11:33:56.791258] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.893 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.153 [2024-06-10 11:33:56.873445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:28.153 [2024-06-10 11:33:56.934663] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.153 [2024-06-10 11:33:56.934697] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:28.153 [2024-06-10 11:33:56.934702] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:28.153 [2024-06-10 11:33:56.934707] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:28.153 [2024-06-10 11:33:56.934710] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
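[annotation] At this point nvmfappstart has launched build/bin/nvmf_tgt with core mask 0x1E and waitforlisten is blocking until pid 3703207 answers on /var/tmp/spdk.sock. The real helper in autotest_common.sh is more elaborate; a hedged sketch of the polling idea (the rpc.py readiness probe below is an assumption about how liveness could be checked, not a copy of the helper):

# Poll until an SPDK app serves its RPC socket, or give up.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1     # target died early
        # rpc_get_methods only succeeds once the socket is being served.
        rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}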
00:25:28.153 [2024-06-10 11:33:56.934824] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:25:28.153 [2024-06-10 11:33:56.935098] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:25:28.153 [2024-06-10 11:33:56.935258] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.153 [2024-06-10 11:33:56.935259] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:25:28.723 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:28.723 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:25:28.723 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:28.723 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:28.723 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:28.723 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:28.723 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:28.723 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.723 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:28.723 [2024-06-10 11:33:57.636413] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11af320/0x11b3810) succeed. 00:25:28.723 [2024-06-10 11:33:57.647795] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11b0960/0x11f4ea0) succeed. 
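[annotation] With all four reactors up and waitforlisten returned 0, shutdown.sh@20 creates the RDMA transport over the RPC socket; the two create_ib_device notices confirm both mlx5 ports were claimed. Outside the harness, the same call would look like this (the rpc.py path follows this workspace's layout; options are exactly those traced above):

/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192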
00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.983 11:33:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:28.983 Malloc1 00:25:28.983 [2024-06-10 11:33:57.842406] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:28.983 Malloc2 00:25:28.983 Malloc3 00:25:28.983 Malloc4 
00:25:29.244 Malloc5 00:25:29.244 Malloc6 00:25:29.244 Malloc7 00:25:29.244 Malloc8 00:25:29.244 Malloc9 00:25:29.244 Malloc10 00:25:29.244 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:29.244 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:29.244 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:29.244 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:29.506 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3703584 00:25:29.506 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3703584 /var/tmp/bdevperf.sock 00:25:29.506 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 3703584 ']' 00:25:29.506 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:29.506 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:29.506 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:29.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:29.506 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:29.506 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:29.506 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:29.506 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:29.506 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:25:29.506 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:25:29.506 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:29.506 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:29.506 { 00:25:29.506 "params": { 00:25:29.506 "name": "Nvme$subsystem", 00:25:29.506 "trtype": "$TEST_TRANSPORT", 00:25:29.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.506 "adrfam": "ipv4", 00:25:29.506 "trsvcid": "$NVMF_PORT", 00:25:29.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.506 "hdgst": ${hdgst:-false}, 00:25:29.506 "ddgst": ${ddgst:-false} 00:25:29.506 }, 00:25:29.506 "method": "bdev_nvme_attach_controller" 00:25:29.506 } 00:25:29.506 EOF 00:25:29.506 )") 00:25:29.506 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:29.506 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:29.506 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:29.506 { 00:25:29.506 "params": { 00:25:29.506 "name": "Nvme$subsystem", 
00:25:29.506 "trtype": "$TEST_TRANSPORT", 00:25:29.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.506 "adrfam": "ipv4", 00:25:29.506 "trsvcid": "$NVMF_PORT", 00:25:29.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.506 "hdgst": ${hdgst:-false}, 00:25:29.506 "ddgst": ${ddgst:-false} 00:25:29.506 }, 00:25:29.506 "method": "bdev_nvme_attach_controller" 00:25:29.506 } 00:25:29.506 EOF 00:25:29.506 )") 00:25:29.506 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:29.506 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:29.506 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:29.506 { 00:25:29.506 "params": { 00:25:29.506 "name": "Nvme$subsystem", 00:25:29.506 "trtype": "$TEST_TRANSPORT", 00:25:29.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.506 "adrfam": "ipv4", 00:25:29.506 "trsvcid": "$NVMF_PORT", 00:25:29.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.506 "hdgst": ${hdgst:-false}, 00:25:29.506 "ddgst": ${ddgst:-false} 00:25:29.506 }, 00:25:29.506 "method": "bdev_nvme_attach_controller" 00:25:29.506 } 00:25:29.506 EOF 00:25:29.506 )") 00:25:29.506 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:29.507 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:29.507 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:29.507 { 00:25:29.507 "params": { 00:25:29.507 "name": "Nvme$subsystem", 00:25:29.507 "trtype": "$TEST_TRANSPORT", 00:25:29.507 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.507 "adrfam": "ipv4", 00:25:29.507 "trsvcid": "$NVMF_PORT", 00:25:29.507 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.507 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.507 "hdgst": ${hdgst:-false}, 00:25:29.507 "ddgst": ${ddgst:-false} 00:25:29.507 }, 00:25:29.507 "method": "bdev_nvme_attach_controller" 00:25:29.507 } 00:25:29.507 EOF 00:25:29.507 )") 00:25:29.507 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:29.507 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:29.507 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:29.507 { 00:25:29.507 "params": { 00:25:29.507 "name": "Nvme$subsystem", 00:25:29.507 "trtype": "$TEST_TRANSPORT", 00:25:29.507 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.507 "adrfam": "ipv4", 00:25:29.507 "trsvcid": "$NVMF_PORT", 00:25:29.507 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.507 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.507 "hdgst": ${hdgst:-false}, 00:25:29.507 "ddgst": ${ddgst:-false} 00:25:29.507 }, 00:25:29.507 "method": "bdev_nvme_attach_controller" 00:25:29.507 } 00:25:29.507 EOF 00:25:29.507 )") 00:25:29.507 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:29.507 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:29.507 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:29.507 { 00:25:29.507 "params": { 00:25:29.507 "name": "Nvme$subsystem", 00:25:29.507 
"trtype": "$TEST_TRANSPORT", 00:25:29.507 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.507 "adrfam": "ipv4", 00:25:29.507 "trsvcid": "$NVMF_PORT", 00:25:29.507 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.507 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.507 "hdgst": ${hdgst:-false}, 00:25:29.507 "ddgst": ${ddgst:-false} 00:25:29.507 }, 00:25:29.507 "method": "bdev_nvme_attach_controller" 00:25:29.507 } 00:25:29.507 EOF 00:25:29.507 )") 00:25:29.507 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:29.507 [2024-06-10 11:33:58.297008] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:25:29.507 [2024-06-10 11:33:58.297060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3703584 ] 00:25:29.507 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:29.507 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:29.507 { 00:25:29.507 "params": { 00:25:29.507 "name": "Nvme$subsystem", 00:25:29.507 "trtype": "$TEST_TRANSPORT", 00:25:29.507 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.507 "adrfam": "ipv4", 00:25:29.507 "trsvcid": "$NVMF_PORT", 00:25:29.507 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.507 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.507 "hdgst": ${hdgst:-false}, 00:25:29.507 "ddgst": ${ddgst:-false} 00:25:29.507 }, 00:25:29.507 "method": "bdev_nvme_attach_controller" 00:25:29.507 } 00:25:29.507 EOF 00:25:29.507 )") 00:25:29.507 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:29.507 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:29.507 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:29.507 { 00:25:29.507 "params": { 00:25:29.507 "name": "Nvme$subsystem", 00:25:29.507 "trtype": "$TEST_TRANSPORT", 00:25:29.507 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.507 "adrfam": "ipv4", 00:25:29.507 "trsvcid": "$NVMF_PORT", 00:25:29.507 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.507 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.507 "hdgst": ${hdgst:-false}, 00:25:29.507 "ddgst": ${ddgst:-false} 00:25:29.507 }, 00:25:29.507 "method": "bdev_nvme_attach_controller" 00:25:29.507 } 00:25:29.507 EOF 00:25:29.507 )") 00:25:29.507 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:29.507 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:29.507 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:29.507 { 00:25:29.507 "params": { 00:25:29.507 "name": "Nvme$subsystem", 00:25:29.507 "trtype": "$TEST_TRANSPORT", 00:25:29.507 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.507 "adrfam": "ipv4", 00:25:29.507 "trsvcid": "$NVMF_PORT", 00:25:29.507 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.507 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.507 "hdgst": ${hdgst:-false}, 00:25:29.507 "ddgst": ${ddgst:-false} 00:25:29.507 }, 00:25:29.507 "method": "bdev_nvme_attach_controller" 00:25:29.507 } 00:25:29.507 EOF 00:25:29.507 )") 00:25:29.507 11:33:58 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:29.507 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.507 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:29.507 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:29.507 { 00:25:29.507 "params": { 00:25:29.507 "name": "Nvme$subsystem", 00:25:29.507 "trtype": "$TEST_TRANSPORT", 00:25:29.507 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.507 "adrfam": "ipv4", 00:25:29.507 "trsvcid": "$NVMF_PORT", 00:25:29.507 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.507 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.507 "hdgst": ${hdgst:-false}, 00:25:29.507 "ddgst": ${ddgst:-false} 00:25:29.507 }, 00:25:29.507 "method": "bdev_nvme_attach_controller" 00:25:29.507 } 00:25:29.507 EOF 00:25:29.507 )") 00:25:29.507 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:29.507 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:25:29.507 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:25:29.507 11:33:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:29.507 "params": { 00:25:29.507 "name": "Nvme1", 00:25:29.507 "trtype": "rdma", 00:25:29.507 "traddr": "192.168.100.8", 00:25:29.507 "adrfam": "ipv4", 00:25:29.507 "trsvcid": "4420", 00:25:29.507 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:29.507 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:29.507 "hdgst": false, 00:25:29.507 "ddgst": false 00:25:29.507 }, 00:25:29.507 "method": "bdev_nvme_attach_controller" 00:25:29.507 },{ 00:25:29.507 "params": { 00:25:29.507 "name": "Nvme2", 00:25:29.507 "trtype": "rdma", 00:25:29.507 "traddr": "192.168.100.8", 00:25:29.507 "adrfam": "ipv4", 00:25:29.507 "trsvcid": "4420", 00:25:29.507 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:29.507 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:29.507 "hdgst": false, 00:25:29.507 "ddgst": false 00:25:29.507 }, 00:25:29.507 "method": "bdev_nvme_attach_controller" 00:25:29.507 },{ 00:25:29.507 "params": { 00:25:29.507 "name": "Nvme3", 00:25:29.507 "trtype": "rdma", 00:25:29.507 "traddr": "192.168.100.8", 00:25:29.507 "adrfam": "ipv4", 00:25:29.507 "trsvcid": "4420", 00:25:29.507 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:29.507 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:29.507 "hdgst": false, 00:25:29.507 "ddgst": false 00:25:29.507 }, 00:25:29.507 "method": "bdev_nvme_attach_controller" 00:25:29.507 },{ 00:25:29.507 "params": { 00:25:29.507 "name": "Nvme4", 00:25:29.507 "trtype": "rdma", 00:25:29.507 "traddr": "192.168.100.8", 00:25:29.507 "adrfam": "ipv4", 00:25:29.507 "trsvcid": "4420", 00:25:29.507 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:29.507 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:29.507 "hdgst": false, 00:25:29.507 "ddgst": false 00:25:29.507 }, 00:25:29.507 "method": "bdev_nvme_attach_controller" 00:25:29.507 },{ 00:25:29.507 "params": { 00:25:29.507 "name": "Nvme5", 00:25:29.507 "trtype": "rdma", 00:25:29.507 "traddr": "192.168.100.8", 00:25:29.507 "adrfam": "ipv4", 00:25:29.507 "trsvcid": "4420", 00:25:29.507 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:29.507 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:29.507 "hdgst": false, 00:25:29.507 "ddgst": false 00:25:29.507 }, 00:25:29.507 "method": "bdev_nvme_attach_controller" 00:25:29.507 },{ 00:25:29.507 "params": { 00:25:29.507 
"name": "Nvme6", 00:25:29.507 "trtype": "rdma", 00:25:29.507 "traddr": "192.168.100.8", 00:25:29.507 "adrfam": "ipv4", 00:25:29.507 "trsvcid": "4420", 00:25:29.507 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:29.507 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:29.507 "hdgst": false, 00:25:29.507 "ddgst": false 00:25:29.507 }, 00:25:29.508 "method": "bdev_nvme_attach_controller" 00:25:29.508 },{ 00:25:29.508 "params": { 00:25:29.508 "name": "Nvme7", 00:25:29.508 "trtype": "rdma", 00:25:29.508 "traddr": "192.168.100.8", 00:25:29.508 "adrfam": "ipv4", 00:25:29.508 "trsvcid": "4420", 00:25:29.508 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:29.508 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:29.508 "hdgst": false, 00:25:29.508 "ddgst": false 00:25:29.508 }, 00:25:29.508 "method": "bdev_nvme_attach_controller" 00:25:29.508 },{ 00:25:29.508 "params": { 00:25:29.508 "name": "Nvme8", 00:25:29.508 "trtype": "rdma", 00:25:29.508 "traddr": "192.168.100.8", 00:25:29.508 "adrfam": "ipv4", 00:25:29.508 "trsvcid": "4420", 00:25:29.508 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:29.508 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:29.508 "hdgst": false, 00:25:29.508 "ddgst": false 00:25:29.508 }, 00:25:29.508 "method": "bdev_nvme_attach_controller" 00:25:29.508 },{ 00:25:29.508 "params": { 00:25:29.508 "name": "Nvme9", 00:25:29.508 "trtype": "rdma", 00:25:29.508 "traddr": "192.168.100.8", 00:25:29.508 "adrfam": "ipv4", 00:25:29.508 "trsvcid": "4420", 00:25:29.508 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:29.508 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:29.508 "hdgst": false, 00:25:29.508 "ddgst": false 00:25:29.508 }, 00:25:29.508 "method": "bdev_nvme_attach_controller" 00:25:29.508 },{ 00:25:29.508 "params": { 00:25:29.508 "name": "Nvme10", 00:25:29.508 "trtype": "rdma", 00:25:29.508 "traddr": "192.168.100.8", 00:25:29.508 "adrfam": "ipv4", 00:25:29.508 "trsvcid": "4420", 00:25:29.508 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:29.508 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:29.508 "hdgst": false, 00:25:29.508 "ddgst": false 00:25:29.508 }, 00:25:29.508 "method": "bdev_nvme_attach_controller" 00:25:29.508 }' 00:25:29.508 [2024-06-10 11:33:58.359452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.508 [2024-06-10 11:33:58.423785] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.449 Running I/O for 10 seconds... 
00:25:30.449 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:30.449 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:25:30.449 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:30.449 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:30.449 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:30.709 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:30.709 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:30.709 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:30.709 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:30.709 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:30.709 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:25:30.709 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:25:30.709 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:30.709 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:30.709 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:30.709 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:30.709 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:30.709 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:30.709 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:30.709 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:25:30.709 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:25:30.709 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:31.044 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:31.044 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:31.044 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:31.044 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:31.044 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.044 11:33:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:31.305 11:34:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.305 11:34:00 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=147 00:25:31.305 11:34:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 147 -ge 100 ']' 00:25:31.305 11:34:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:25:31.305 11:34:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:25:31.305 11:34:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:25:31.305 11:34:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3703207 00:25:31.305 11:34:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@949 -- # '[' -z 3703207 ']' 00:25:31.305 11:34:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # kill -0 3703207 00:25:31.305 11:34:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # uname 00:25:31.305 11:34:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:31.305 11:34:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3703207 00:25:31.305 11:34:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:31.305 11:34:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:31.305 11:34:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3703207' 00:25:31.305 killing process with pid 3703207 00:25:31.305 11:34:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # kill 3703207 00:25:31.305 11:34:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # wait 3703207 00:25:31.565 11:34:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:25:31.565 11:34:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:25:32.512 [2024-06-10 11:34:01.256813] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60ee80 was disconnected and freed. reset controller. 00:25:32.512 [2024-06-10 11:34:01.259299] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60ec00 was disconnected and freed. reset controller. 00:25:32.512 [2024-06-10 11:34:01.261903] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60e980 was disconnected and freed. reset controller. 00:25:32.512 [2024-06-10 11:34:01.264398] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60e700 was disconnected and freed. reset controller. 00:25:32.512 [2024-06-10 11:34:01.267011] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60e480 was disconnected and freed. reset controller. 
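[annotation] waitforio (shutdown.sh@57-69) polls bdevperf's private RPC socket for num_read_ops on Nvme1n1: the first sample was 3, the second 147, so the `-ge 100` check passed and killprocess could terminate nvmf_tgt (pid 3703207) while I/O was still in flight, which is the whole point of shutdown_tc3. A condensed sketch of that loop (rpc.py stands in for the repo's rpc_cmd wrapper):

# Poll read I/O on a bdev until at least 100 reads complete,
# checking at most 10 times, 0.25 s apart (mirrors shutdown.sh@59-67).
waitforio_sketch() {
    local sock=$1 bdev=$2 ret=1 count
    for ((i = 10; i != 0; i--)); do
        count=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}
waitforio_sketch /var/tmp/bdevperf.sock Nvme1n1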
00:25:32.512 [2024-06-10 11:34:01.267032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0dfd80 len:0x10000 key:0x182c00 00:25:32.512 [2024-06-10 11:34:01.267042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.512 [2024-06-10 11:34:01.267066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0cfd00 len:0x10000 key:0x182c00 00:25:32.512 [2024-06-10 11:34:01.267073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.512 [2024-06-10 11:34:01.267083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0bfc80 len:0x10000 key:0x182c00 00:25:32.512 [2024-06-10 11:34:01.267090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.512 [2024-06-10 11:34:01.267100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0afc00 len:0x10000 key:0x182c00 00:25:32.512 [2024-06-10 11:34:01.267107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.512 [2024-06-10 11:34:01.267116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a09fb80 len:0x10000 key:0x182c00 00:25:32.512 [2024-06-10 11:34:01.267124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.512 [2024-06-10 11:34:01.267134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a08fb00 len:0x10000 key:0x182c00 00:25:32.512 [2024-06-10 11:34:01.267141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.512 [2024-06-10 11:34:01.267150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a07fa80 len:0x10000 key:0x182c00 00:25:32.512 [2024-06-10 11:34:01.267157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.512 [2024-06-10 11:34:01.267166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a06fa00 len:0x10000 key:0x182c00 00:25:32.512 [2024-06-10 11:34:01.267174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.512 [2024-06-10 11:34:01.267183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a05f980 len:0x10000 key:0x182c00 00:25:32.512 [2024-06-10 11:34:01.267190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.512 [2024-06-10 11:34:01.267199] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a04f900 len:0x10000 key:0x182c00 00:25:32.512 [2024-06-10 11:34:01.267207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.512 [2024-06-10 11:34:01.267216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a03f880 len:0x10000 key:0x182c00 00:25:32.512 [2024-06-10 11:34:01.267223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.512 [2024-06-10 11:34:01.267232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a02f800 len:0x10000 key:0x182c00 00:25:32.512 [2024-06-10 11:34:01.267239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.512 [2024-06-10 11:34:01.267250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a01f780 len:0x10000 key:0x182c00 00:25:32.512 [2024-06-10 11:34:01.267257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.512 [2024-06-10 11:34:01.267267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a00f700 len:0x10000 key:0x182c00 00:25:32.512 [2024-06-10 11:34:01.267274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.512 [2024-06-10 11:34:01.267283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e9f580 len:0x10000 key:0x182b00 00:25:32.512 [2024-06-10 11:34:01.267290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.512 [2024-06-10 11:34:01.267299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e8f500 len:0x10000 key:0x182b00 00:25:32.512 [2024-06-10 11:34:01.267306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.512 [2024-06-10 11:34:01.267315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e7f480 len:0x10000 key:0x182b00 00:25:32.512 [2024-06-10 11:34:01.267322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.512 [2024-06-10 11:34:01.267331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e6f400 len:0x10000 key:0x182b00 00:25:32.512 [2024-06-10 11:34:01.267339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.512 [2024-06-10 11:34:01.267348] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e5f380 len:0x10000 key:0x182b00 00:25:32.512 [2024-06-10 11:34:01.267355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.512 [2024-06-10 11:34:01.267364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e4f300 len:0x10000 key:0x182b00 00:25:32.512 [2024-06-10 11:34:01.267371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.512 [2024-06-10 11:34:01.267381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e3f280 len:0x10000 key:0x182b00 00:25:32.512 [2024-06-10 11:34:01.267387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.512 [2024-06-10 11:34:01.267397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e2f200 len:0x10000 key:0x182b00 00:25:32.512 [2024-06-10 11:34:01.267404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.512 [2024-06-10 11:34:01.267414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e1f180 len:0x10000 key:0x182b00 00:25:32.512 [2024-06-10 11:34:01.267421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.512 [2024-06-10 11:34:01.267431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e0f100 len:0x10000 key:0x182b00 00:25:32.513 [2024-06-10 11:34:01.267439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3f0000 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3dff80 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3cff00 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 
lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3bfe80 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3afe00 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a39fd80 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a38fd00 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a37fc80 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a36fc00 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a35fb80 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a34fb00 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a33fa80 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20001a32fa00 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a31f980 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a30f900 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2ff880 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2ef800 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2df780 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2cf700 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2bf680 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2af600 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a29f580 len:0x10000 
key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a28f500 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a27f480 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a26f400 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a25f380 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a24f300 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a23f280 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a22f200 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a21f180 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 11:34:01.267946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.513 [2024-06-10 11:34:01.267955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a20f100 len:0x10000 key:0x182f00 00:25:32.513 [2024-06-10 
11:34:01.267962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.267971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5f0000 len:0x10000 key:0x182d00 00:25:32.514 [2024-06-10 11:34:01.267978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.267987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5dff80 len:0x10000 key:0x182d00 00:25:32.514 [2024-06-10 11:34:01.267994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.268005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5cff00 len:0x10000 key:0x182d00 00:25:32.514 [2024-06-10 11:34:01.268012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.268022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5bfe80 len:0x10000 key:0x182d00 00:25:32.514 [2024-06-10 11:34:01.268029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.268038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5afe00 len:0x10000 key:0x182d00 00:25:32.514 [2024-06-10 11:34:01.268045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.268053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a59fd80 len:0x10000 key:0x182d00 00:25:32.514 [2024-06-10 11:34:01.268061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.268070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a58fd00 len:0x10000 key:0x182d00 00:25:32.514 [2024-06-10 11:34:01.268077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.268086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a57fc80 len:0x10000 key:0x182d00 00:25:32.514 [2024-06-10 11:34:01.268093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.268102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0efe00 len:0x10000 key:0x182c00 00:25:32.514 [2024-06-10 11:34:01.268109] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:0a90 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.270736] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60e200 was disconnected and freed. reset controller. 00:25:32.514 [2024-06-10 11:34:01.270753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a43f880 len:0x10000 key:0x182d00 00:25:32.514 [2024-06-10 11:34:01.270760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.270791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a42f800 len:0x10000 key:0x182d00 00:25:32.514 [2024-06-10 11:34:01.270799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.270809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a41f780 len:0x10000 key:0x182d00 00:25:32.514 [2024-06-10 11:34:01.270816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.270826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a40f700 len:0x10000 key:0x182d00 00:25:32.514 [2024-06-10 11:34:01.270832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.270845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7f0000 len:0x10000 key:0x183100 00:25:32.514 [2024-06-10 11:34:01.270852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.270862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7dff80 len:0x10000 key:0x183100 00:25:32.514 [2024-06-10 11:34:01.270869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.270878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7cff00 len:0x10000 key:0x183100 00:25:32.514 [2024-06-10 11:34:01.270885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.270895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7bfe80 len:0x10000 key:0x183100 00:25:32.514 [2024-06-10 11:34:01.270902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.270912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 
SGL KEYED DATA BLOCK ADDRESS 0x20001a7afe00 len:0x10000 key:0x183100 00:25:32.514 [2024-06-10 11:34:01.270919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.270928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a79fd80 len:0x10000 key:0x183100 00:25:32.514 [2024-06-10 11:34:01.270934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.270944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a78fd00 len:0x10000 key:0x183100 00:25:32.514 [2024-06-10 11:34:01.270951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.270960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a77fc80 len:0x10000 key:0x183100 00:25:32.514 [2024-06-10 11:34:01.270967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.270976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a76fc00 len:0x10000 key:0x183100 00:25:32.514 [2024-06-10 11:34:01.270983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.270993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a75fb80 len:0x10000 key:0x183100 00:25:32.514 [2024-06-10 11:34:01.271000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.271009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a74fb00 len:0x10000 key:0x183100 00:25:32.514 [2024-06-10 11:34:01.271016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.271026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a73fa80 len:0x10000 key:0x183100 00:25:32.514 [2024-06-10 11:34:01.271033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.271042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a72fa00 len:0x10000 key:0x183100 00:25:32.514 [2024-06-10 11:34:01.271049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.271058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a71f980 
len:0x10000 key:0x183100 00:25:32.514 [2024-06-10 11:34:01.271065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.271074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a70f900 len:0x10000 key:0x183100 00:25:32.514 [2024-06-10 11:34:01.271081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.271091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ff880 len:0x10000 key:0x183100 00:25:32.514 [2024-06-10 11:34:01.271098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.514 [2024-06-10 11:34:01.271107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ef800 len:0x10000 key:0x183100 00:25:32.514 [2024-06-10 11:34:01.271114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6df780 len:0x10000 key:0x183100 00:25:32.515 [2024-06-10 11:34:01.271130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6cf700 len:0x10000 key:0x183100 00:25:32.515 [2024-06-10 11:34:01.271146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6bf680 len:0x10000 key:0x183100 00:25:32.515 [2024-06-10 11:34:01.271162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6af600 len:0x10000 key:0x183100 00:25:32.515 [2024-06-10 11:34:01.271178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a69f580 len:0x10000 key:0x183100 00:25:32.515 [2024-06-10 11:34:01.271194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a68f500 len:0x10000 key:0x183100 00:25:32.515 
[2024-06-10 11:34:01.271212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a67f480 len:0x10000 key:0x183100 00:25:32.515 [2024-06-10 11:34:01.271228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a66f400 len:0x10000 key:0x183100 00:25:32.515 [2024-06-10 11:34:01.271245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a65f380 len:0x10000 key:0x183100 00:25:32.515 [2024-06-10 11:34:01.271261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a64f300 len:0x10000 key:0x183100 00:25:32.515 [2024-06-10 11:34:01.271277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a63f280 len:0x10000 key:0x183100 00:25:32.515 [2024-06-10 11:34:01.271293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a62f200 len:0x10000 key:0x183100 00:25:32.515 [2024-06-10 11:34:01.271310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a61f180 len:0x10000 key:0x183100 00:25:32.515 [2024-06-10 11:34:01.271326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a60f100 len:0x10000 key:0x183100 00:25:32.515 [2024-06-10 11:34:01.271342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9f0000 len:0x10000 key:0x183800 00:25:32.515 [2024-06-10 11:34:01.271358] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9dff80 len:0x10000 key:0x183800 00:25:32.515 [2024-06-10 11:34:01.271376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9cff00 len:0x10000 key:0x183800 00:25:32.515 [2024-06-10 11:34:01.271394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9bfe80 len:0x10000 key:0x183800 00:25:32.515 [2024-06-10 11:34:01.271410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9afe00 len:0x10000 key:0x183800 00:25:32.515 [2024-06-10 11:34:01.271426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a99fd80 len:0x10000 key:0x183800 00:25:32.515 [2024-06-10 11:34:01.271443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a98fd00 len:0x10000 key:0x183800 00:25:32.515 [2024-06-10 11:34:01.271460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a97fc80 len:0x10000 key:0x183800 00:25:32.515 [2024-06-10 11:34:01.271476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a96fc00 len:0x10000 key:0x183800 00:25:32.515 [2024-06-10 11:34:01.271492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a95fb80 len:0x10000 key:0x183800 00:25:32.515 [2024-06-10 11:34:01.271509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a94fb00 len:0x10000 key:0x183800 00:25:32.515 [2024-06-10 11:34:01.271525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a93fa80 len:0x10000 key:0x183800 00:25:32.515 [2024-06-10 11:34:01.271540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a92fa00 len:0x10000 key:0x183800 00:25:32.515 [2024-06-10 11:34:01.271557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a91f980 len:0x10000 key:0x183800 00:25:32.515 [2024-06-10 11:34:01.271573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a90f900 len:0x10000 key:0x183800 00:25:32.515 [2024-06-10 11:34:01.271591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ff880 len:0x10000 key:0x183800 00:25:32.515 [2024-06-10 11:34:01.271608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef800 len:0x10000 key:0x183800 00:25:32.515 [2024-06-10 11:34:01.271624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df780 len:0x10000 key:0x183800 00:25:32.515 [2024-06-10 11:34:01.271641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8cf700 len:0x10000 key:0x183800 00:25:32.515 [2024-06-10 11:34:01.271657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.515 [2024-06-10 11:34:01.271666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x183800 00:25:32.516 [2024-06-10 11:34:01.271673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.271682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8af600 len:0x10000 key:0x183800 00:25:32.516 [2024-06-10 11:34:01.271689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.271699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x183800 00:25:32.516 [2024-06-10 11:34:01.271706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.271715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88f500 len:0x10000 key:0x183800 00:25:32.516 [2024-06-10 11:34:01.271722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.271731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a87f480 len:0x10000 key:0x183800 00:25:32.516 [2024-06-10 11:34:01.271738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.271747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86f400 len:0x10000 key:0x183800 00:25:32.516 [2024-06-10 11:34:01.271754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.271769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a85f380 len:0x10000 key:0x183800 00:25:32.516 [2024-06-10 11:34:01.271776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.271786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a46fa00 len:0x10000 key:0x182d00 00:25:32.516 [2024-06-10 11:34:01.271792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.271801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f873000 len:0x10000 key:0x182800 00:25:32.516 [2024-06-10 11:34:01.271808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 
00:25:32.516 [2024-06-10 11:34:01.271818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f852000 len:0x10000 key:0x182800 00:25:32.516 [2024-06-10 11:34:01.271825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:3350 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.274439] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60df80 was disconnected and freed. reset controller. 00:25:32.516 [2024-06-10 11:34:01.274455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aadfd80 len:0x10000 key:0x183300 00:25:32.516 [2024-06-10 11:34:01.274463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.274475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aacfd00 len:0x10000 key:0x183300 00:25:32.516 [2024-06-10 11:34:01.274482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.274492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aabfc80 len:0x10000 key:0x183300 00:25:32.516 [2024-06-10 11:34:01.274499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.274508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaafc00 len:0x10000 key:0x183300 00:25:32.516 [2024-06-10 11:34:01.274515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.274525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa9fb80 len:0x10000 key:0x183300 00:25:32.516 [2024-06-10 11:34:01.274532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.274542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa8fb00 len:0x10000 key:0x183300 00:25:32.516 [2024-06-10 11:34:01.274548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.274558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa7fa80 len:0x10000 key:0x183300 00:25:32.516 [2024-06-10 11:34:01.274565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.274577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa6fa00 len:0x10000 key:0x183300 00:25:32.516 [2024-06-10 11:34:01.274584] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.274593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa5f980 len:0x10000 key:0x183300 00:25:32.516 [2024-06-10 11:34:01.274600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.274609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa4f900 len:0x10000 key:0x183300 00:25:32.516 [2024-06-10 11:34:01.274616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.274626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa3f880 len:0x10000 key:0x183300 00:25:32.516 [2024-06-10 11:34:01.274633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.274642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa2f800 len:0x10000 key:0x183300 00:25:32.516 [2024-06-10 11:34:01.274649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.274658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa1f780 len:0x10000 key:0x183300 00:25:32.516 [2024-06-10 11:34:01.274665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.274675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa0f700 len:0x10000 key:0x183300 00:25:32.516 [2024-06-10 11:34:01.274682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.274691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a84f300 len:0x10000 key:0x183800 00:25:32.516 [2024-06-10 11:34:01.274698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.274707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a83f280 len:0x10000 key:0x183800 00:25:32.516 [2024-06-10 11:34:01.274714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.274723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a82f200 len:0x10000 key:0x183800 00:25:32.516 [2024-06-10 11:34:01.274730] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.274740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a81f180 len:0x10000 key:0x183800 00:25:32.516 [2024-06-10 11:34:01.274747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.516 [2024-06-10 11:34:01.274756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a80f100 len:0x10000 key:0x183800 00:25:32.516 [2024-06-10 11:34:01.274769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.274778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adf0000 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.274786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.274795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001addff80 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.274802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.274811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adcff00 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.274818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.274827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adbfe80 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.274834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.274843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adafe00 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.274850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.274859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad9fd80 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.274866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.274875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad8fd00 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.274882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.274891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad7fc80 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.274898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.274907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad6fc00 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.274914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.274923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad5fb80 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.274930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.274939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad4fb00 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.274948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.274957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad3fa80 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.274964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.274973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad2fa00 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.274980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.274990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f980 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.274997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.275006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad0f900 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.275014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.275024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acff880 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.275031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 
00:25:32.517 [2024-06-10 11:34:01.275040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.275048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.275057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acdf780 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.275064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.275073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001accf700 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.275080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.275089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.275096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.275109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acaf600 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.275116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.275125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac9f580 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.275132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.275142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.275149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.275159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac7f480 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.275166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.275174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.275181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.275191] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac5f380 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.275197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.517 [2024-06-10 11:34:01.275207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac4f300 len:0x10000 key:0x183000 00:25:32.517 [2024-06-10 11:34:01.275214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.275223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x183000 00:25:32.518 [2024-06-10 11:34:01.275229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.275239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f200 len:0x10000 key:0x183000 00:25:32.518 [2024-06-10 11:34:01.275246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.275255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac1f180 len:0x10000 key:0x183000 00:25:32.518 [2024-06-10 11:34:01.275262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.275271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac0f100 len:0x10000 key:0x183000 00:25:32.518 [2024-06-10 11:34:01.275278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.275287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x183e00 00:25:32.518 [2024-06-10 11:34:01.275294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.275303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x183e00 00:25:32.518 [2024-06-10 11:34:01.275310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.275320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afcff00 len:0x10000 key:0x183e00 00:25:32.518 [2024-06-10 11:34:01.275327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.275336] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x183e00 00:25:32.518 [2024-06-10 11:34:01.275344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.275353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x183e00 00:25:32.518 [2024-06-10 11:34:01.275360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.282129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x183e00 00:25:32.518 [2024-06-10 11:34:01.282158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.282169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x183e00 00:25:32.518 [2024-06-10 11:34:01.282176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.282186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af7fc80 len:0x10000 key:0x183e00 00:25:32.518 [2024-06-10 11:34:01.282193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.282203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x183e00 00:25:32.518 [2024-06-10 11:34:01.282210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.282219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x183e00 00:25:32.518 [2024-06-10 11:34:01.282226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.282235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af4fb00 len:0x10000 key:0x183e00 00:25:32.518 [2024-06-10 11:34:01.282242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.282251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x183e00 00:25:32.518 [2024-06-10 11:34:01.282258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.282268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af2fa00 len:0x10000 key:0x183e00 00:25:32.518 [2024-06-10 11:34:01.282274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.282284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaefe00 len:0x10000 key:0x183300 00:25:32.518 [2024-06-10 11:34:01.282296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:bb60 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.285202] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60dd00 was disconnected and freed. reset controller. 00:25:32.518 [2024-06-10 11:34:01.285243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f700 len:0x10000 key:0x183e00 00:25:32.518 [2024-06-10 11:34:01.285254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.285271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1f0000 len:0x10000 key:0x183c00 00:25:32.518 [2024-06-10 11:34:01.285279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.285289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1dff80 len:0x10000 key:0x183c00 00:25:32.518 [2024-06-10 11:34:01.285296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.285305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1cff00 len:0x10000 key:0x183c00 00:25:32.518 [2024-06-10 11:34:01.285312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.285321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfe80 len:0x10000 key:0x183c00 00:25:32.518 [2024-06-10 11:34:01.285329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.285338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afe00 len:0x10000 key:0x183c00 00:25:32.518 [2024-06-10 11:34:01.285345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.285354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b19fd80 len:0x10000 key:0x183c00 00:25:32.518 [2024-06-10 11:34:01.285361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.285370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b18fd00 len:0x10000 key:0x183c00 00:25:32.518 [2024-06-10 11:34:01.285377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.285387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b17fc80 len:0x10000 key:0x183c00 00:25:32.518 [2024-06-10 11:34:01.285394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.285403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b16fc00 len:0x10000 key:0x183c00 00:25:32.518 [2024-06-10 11:34:01.285410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.285424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b15fb80 len:0x10000 key:0x183c00 00:25:32.518 [2024-06-10 11:34:01.285431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.285440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b14fb00 len:0x10000 key:0x183c00 00:25:32.518 [2024-06-10 11:34:01.285447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.285456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b13fa80 len:0x10000 key:0x183c00 00:25:32.518 [2024-06-10 11:34:01.285463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.518 [2024-06-10 11:34:01.285473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b12fa00 len:0x10000 key:0x183c00 00:25:32.518 [2024-06-10 11:34:01.285479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b11f980 len:0x10000 key:0x183c00 00:25:32.519 [2024-06-10 11:34:01.285496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b10f900 len:0x10000 key:0x183c00 00:25:32.519 [2024-06-10 11:34:01.285512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 
[2024-06-10 11:34:01.285521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ff880 len:0x10000 key:0x183c00 00:25:32.519 [2024-06-10 11:34:01.285528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x183c00 00:25:32.519 [2024-06-10 11:34:01.285545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x183c00 00:25:32.519 [2024-06-10 11:34:01.285561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cf700 len:0x10000 key:0x183c00 00:25:32.519 [2024-06-10 11:34:01.285577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x183c00 00:25:32.519 [2024-06-10 11:34:01.285593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0af600 len:0x10000 key:0x183c00 00:25:32.519 [2024-06-10 11:34:01.285613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x183c00 00:25:32.519 [2024-06-10 11:34:01.285630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x183c00 00:25:32.519 [2024-06-10 11:34:01.285646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07f480 len:0x10000 key:0x183c00 00:25:32.519 [2024-06-10 11:34:01.285662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x183c00 00:25:32.519 [2024-06-10 11:34:01.285678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x183c00 00:25:32.519 [2024-06-10 11:34:01.285694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f300 len:0x10000 key:0x183c00 00:25:32.519 [2024-06-10 11:34:01.285711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f280 len:0x10000 key:0x183c00 00:25:32.519 [2024-06-10 11:34:01.285727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x183c00 00:25:32.519 [2024-06-10 11:34:01.285743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f180 len:0x10000 key:0x183c00 00:25:32.519 [2024-06-10 11:34:01.285759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f100 len:0x10000 key:0x183c00 00:25:32.519 [2024-06-10 11:34:01.285785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x183200 00:25:32.519 [2024-06-10 11:34:01.285803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x183200 00:25:32.519 [2024-06-10 11:34:01.285820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cff00 len:0x10000 key:0x183200 00:25:32.519 [2024-06-10 11:34:01.285836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x183200 00:25:32.519 [2024-06-10 11:34:01.285853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x183200 00:25:32.519 [2024-06-10 11:34:01.285869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b39fd80 len:0x10000 key:0x183200 00:25:32.519 [2024-06-10 11:34:01.285886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x183200 00:25:32.519 [2024-06-10 11:34:01.285903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fc80 len:0x10000 key:0x183200 00:25:32.519 [2024-06-10 11:34:01.285919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36fc00 len:0x10000 key:0x183200 00:25:32.519 [2024-06-10 11:34:01.285936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x183200 00:25:32.519 [2024-06-10 11:34:01.285952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34fb00 len:0x10000 key:0x183200 00:25:32.519 [2024-06-10 11:34:01.285968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 
SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x183200 00:25:32.519 [2024-06-10 11:34:01.285984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.285995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x183200 00:25:32.519 [2024-06-10 11:34:01.286002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.286011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b31f980 len:0x10000 key:0x183200 00:25:32.519 [2024-06-10 11:34:01.286018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.286027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x183200 00:25:32.519 [2024-06-10 11:34:01.286034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.286043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x183200 00:25:32.519 [2024-06-10 11:34:01.286050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.519 [2024-06-10 11:34:01.286060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ef800 len:0x10000 key:0x183200 00:25:32.519 [2024-06-10 11:34:01.286067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.286076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x183200 00:25:32.520 [2024-06-10 11:34:01.286083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.286092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x183200 00:25:32.520 [2024-06-10 11:34:01.286099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.286109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x183200 00:25:32.520 [2024-06-10 11:34:01.286115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.286125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2af600 
len:0x10000 key:0x183200 00:25:32.520 [2024-06-10 11:34:01.286131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.286140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x183200 00:25:32.520 [2024-06-10 11:34:01.286148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.286157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x183200 00:25:32.520 [2024-06-10 11:34:01.286164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.286174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x183200 00:25:32.520 [2024-06-10 11:34:01.286181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.286191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x183200 00:25:32.520 [2024-06-10 11:34:01.286198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.286207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x183200 00:25:32.520 [2024-06-10 11:34:01.286214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.286223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x183200 00:25:32.520 [2024-06-10 11:34:01.286230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.286239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x183200 00:25:32.520 [2024-06-10 11:34:01.286246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.286255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b22f200 len:0x10000 key:0x183200 00:25:32.520 [2024-06-10 11:34:01.286262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.286271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x183200 00:25:32.520 
[2024-06-10 11:34:01.286278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.286287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x183200 00:25:32.520 [2024-06-10 11:34:01.286294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.286303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae1f780 len:0x10000 key:0x183e00 00:25:32.520 [2024-06-10 11:34:01.286310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:cc30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.288566] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60da80 was disconnected and freed. reset controller. 00:25:32.520 [2024-06-10 11:34:01.288585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001065f000 len:0x10000 key:0x182800 00:25:32.520 [2024-06-10 11:34:01.288593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.288605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001063e000 len:0x10000 key:0x182800 00:25:32.520 [2024-06-10 11:34:01.288612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.288624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001061d000 len:0x10000 key:0x182800 00:25:32.520 [2024-06-10 11:34:01.288632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.288641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105fc000 len:0x10000 key:0x182800 00:25:32.520 [2024-06-10 11:34:01.288648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.288657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105db000 len:0x10000 key:0x182800 00:25:32.520 [2024-06-10 11:34:01.288665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.288674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105ba000 len:0x10000 key:0x182800 00:25:32.520 [2024-06-10 11:34:01.288681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.288691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:8960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010599000 len:0x10000 key:0x182800 00:25:32.520 [2024-06-10 11:34:01.288698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.288707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010578000 len:0x10000 key:0x182800 00:25:32.520 [2024-06-10 11:34:01.288714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.288723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010557000 len:0x10000 key:0x182800 00:25:32.520 [2024-06-10 11:34:01.288730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.288739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010536000 len:0x10000 key:0x182800 00:25:32.520 [2024-06-10 11:34:01.288746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.288755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010515000 len:0x10000 key:0x182800 00:25:32.520 [2024-06-10 11:34:01.288771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.288781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104f4000 len:0x10000 key:0x182800 00:25:32.520 [2024-06-10 11:34:01.288788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.288797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104d3000 len:0x10000 key:0x182800 00:25:32.520 [2024-06-10 11:34:01.288804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.288813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104b2000 len:0x10000 key:0x182800 00:25:32.520 [2024-06-10 11:34:01.288822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.288832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010491000 len:0x10000 key:0x182800 00:25:32.520 [2024-06-10 11:34:01.288838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.288848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010470000 
len:0x10000 key:0x182800 00:25:32.520 [2024-06-10 11:34:01.288854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.288864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f9f000 len:0x10000 key:0x182800 00:25:32.520 [2024-06-10 11:34:01.288871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.288880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f7e000 len:0x10000 key:0x182800 00:25:32.520 [2024-06-10 11:34:01.288887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.288897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f5d000 len:0x10000 key:0x182800 00:25:32.520 [2024-06-10 11:34:01.288904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.288913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f3c000 len:0x10000 key:0x182800 00:25:32.520 [2024-06-10 11:34:01.288920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.520 [2024-06-10 11:34:01.288930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f1b000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.288936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.288946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012efa000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.288953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.288962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ed9000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.288969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.288978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012eb8000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.288985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.288994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e97000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 
11:34:01.289002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e76000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e55000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e34000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e13000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012df2000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012dd1000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012db0000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d311000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2f0000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131af000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001318e000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001316d000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001314c000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001312b000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001310a000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000130e9000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000130c8000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000130a7000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013086000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013065000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013044000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013023000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013002000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012fe1000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012fc0000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e13f000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e11e000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 
sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e0fd000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.521 [2024-06-10 11:34:01.289469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e0dc000 len:0x10000 key:0x182800 00:25:32.521 [2024-06-10 11:34:01.289476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.289488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e0bb000 len:0x10000 key:0x182800 00:25:32.522 [2024-06-10 11:34:01.289495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.289504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e09a000 len:0x10000 key:0x182800 00:25:32.522 [2024-06-10 11:34:01.289511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.289520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e079000 len:0x10000 key:0x182800 00:25:32.522 [2024-06-10 11:34:01.289528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.289537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e058000 len:0x10000 key:0x182800 00:25:32.522 [2024-06-10 11:34:01.289544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.289553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e037000 len:0x10000 key:0x182800 00:25:32.522 [2024-06-10 11:34:01.289561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.289570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e016000 len:0x10000 key:0x182800 00:25:32.522 [2024-06-10 11:34:01.289577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.289586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dff5000 len:0x10000 key:0x182800 00:25:32.522 [2024-06-10 11:34:01.289593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.522 [2024-06-10 
11:34:01.289603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dfd4000 len:0x10000 key:0x182800 00:25:32.522 [2024-06-10 11:34:01.289609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.289618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dfb3000 len:0x10000 key:0x182800 00:25:32.522 [2024-06-10 11:34:01.289626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.289635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df92000 len:0x10000 key:0x182800 00:25:32.522 [2024-06-10 11:34:01.289642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:de30 p:0 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.292563] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60d800 was disconnected and freed. reset controller. 00:25:32.522 [2024-06-10 11:34:01.292631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.522 [2024-06-10 11:34:01.292641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.292649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.522 [2024-06-10 11:34:01.292656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.292664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.522 [2024-06-10 11:34:01.292671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.292679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.522 [2024-06-10 11:34:01.292686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.295196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:32.522 [2024-06-10 11:34:01.295207] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:32.522 [2024-06-10 11:34:01.295214] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:25:32.522 [2024-06-10 11:34:01.295228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.522 [2024-06-10 11:34:01.295239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.295247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.522 [2024-06-10 11:34:01.295254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.295262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.522 [2024-06-10 11:34:01.295268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.295276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.522 [2024-06-10 11:34:01.295283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.297738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:32.522 [2024-06-10 11:34:01.297749] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:32.522 [2024-06-10 11:34:01.297755] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:32.522 [2024-06-10 11:34:01.297772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.522 [2024-06-10 11:34:01.297779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.297787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.522 [2024-06-10 11:34:01.297794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.297801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.522 [2024-06-10 11:34:01.297808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.297816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.522 [2024-06-10 11:34:01.297822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.300154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:32.522 [2024-06-10 11:34:01.300164] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:25:32.522 [2024-06-10 11:34:01.300171] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:32.522 [2024-06-10 11:34:01.300183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.522 [2024-06-10 11:34:01.300191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.300198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.522 [2024-06-10 11:34:01.300205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.300213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.522 [2024-06-10 11:34:01.300220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.300230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.522 [2024-06-10 11:34:01.300237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.302701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:32.522 [2024-06-10 11:34:01.302711] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:32.522 [2024-06-10 11:34:01.302717] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:25:32.522 [2024-06-10 11:34:01.302728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.522 [2024-06-10 11:34:01.302735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.522 [2024-06-10 11:34:01.302743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.523 [2024-06-10 11:34:01.302750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.523 [2024-06-10 11:34:01.302757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.523 [2024-06-10 11:34:01.302781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.523 [2024-06-10 11:34:01.302789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.523 [2024-06-10 11:34:01.302796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.523 [2024-06-10 11:34:01.304869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:32.523 [2024-06-10 11:34:01.304878] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:32.523 [2024-06-10 11:34:01.304884] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:32.523 [2024-06-10 11:34:01.304895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.523 [2024-06-10 11:34:01.304903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.523 [2024-06-10 11:34:01.304910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.523 [2024-06-10 11:34:01.304917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.523 [2024-06-10 11:34:01.304924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.523 [2024-06-10 11:34:01.304931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.523 [2024-06-10 11:34:01.304938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.523 [2024-06-10 11:34:01.304945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.523 [2024-06-10 11:34:01.307234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:32.523 [2024-06-10 11:34:01.307244] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:25:32.523 [2024-06-10 11:34:01.307252] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:32.523 [2024-06-10 11:34:01.307264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.523 [2024-06-10 11:34:01.307271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.523 [2024-06-10 11:34:01.307279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.523 [2024-06-10 11:34:01.307285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.523 [2024-06-10 11:34:01.307293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.523 [2024-06-10 11:34:01.307300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.523 [2024-06-10 11:34:01.307307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.523 [2024-06-10 11:34:01.307314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.523 [2024-06-10 11:34:01.309574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:32.523 [2024-06-10 11:34:01.309584] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:32.523 [2024-06-10 11:34:01.309590] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:25:32.523 [2024-06-10 11:34:01.309602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.523 [2024-06-10 11:34:01.309609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.523 [2024-06-10 11:34:01.309617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.523 [2024-06-10 11:34:01.309624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.523 [2024-06-10 11:34:01.309631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.523 [2024-06-10 11:34:01.309638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.523 [2024-06-10 11:34:01.309645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.523 [2024-06-10 11:34:01.309652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.523 [2024-06-10 11:34:01.311887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:32.523 [2024-06-10 11:34:01.311897] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:32.523 [2024-06-10 11:34:01.311903] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:32.523 [2024-06-10 11:34:01.311914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.523 [2024-06-10 11:34:01.311921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.523 [2024-06-10 11:34:01.311928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.523 [2024-06-10 11:34:01.311938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.523 [2024-06-10 11:34:01.311945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.523 [2024-06-10 11:34:01.311952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.523 [2024-06-10 11:34:01.311959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.523 [2024-06-10 11:34:01.311966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.523 [2024-06-10 11:34:01.314109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:32.523 [2024-06-10 11:34:01.314121] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:25:32.523 [2024-06-10 11:34:01.314127] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:32.523 [2024-06-10 11:34:01.314138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.523 [2024-06-10 11:34:01.314145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.523 [2024-06-10 11:34:01.314153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.523 [2024-06-10 11:34:01.314159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.523 [2024-06-10 11:34:01.314167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.523 [2024-06-10 11:34:01.314174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.523 [2024-06-10 11:34:01.314181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.523 [2024-06-10 11:34:01.314188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:5030 cdw0:0 sqhd:d800 p:1 m:0 dnr:0 00:25:32.523 [2024-06-10 11:34:01.334178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:32.523 [2024-06-10 11:34:01.334191] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:32.523 [2024-06-10 11:34:01.334198] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:32.523 [2024-06-10 11:34:01.345755] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:32.523 [2024-06-10 11:34:01.345786] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:32.523 [2024-06-10 11:34:01.345797] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:32.523 [2024-06-10 11:34:01.345841] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:32.523 [2024-06-10 11:34:01.345855] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:32.523 [2024-06-10 11:34:01.345868] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:32.523 [2024-06-10 11:34:01.345878] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:32.523 [2024-06-10 11:34:01.345889] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:32.523 [2024-06-10 11:34:01.345905] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:32.523 [2024-06-10 11:34:01.345920] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:25:32.523 [2024-06-10 11:34:01.346019] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:25:32.523 [2024-06-10 11:34:01.346030] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:25:32.523 [2024-06-10 11:34:01.346038] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:25:32.523 [2024-06-10 11:34:01.346049] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:25:32.523 [2024-06-10 11:34:01.348702] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:25:32.523 task offset: 34816 on job bdev=Nvme1n1 fails
00:25:32.524
00:25:32.524 Latency(us)
00:25:32.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:32.524 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:32.524 Job: Nvme1n1 ended in about 2.06 seconds with error
00:25:32.524 Verification LBA range: start 0x0 length 0x400
00:25:32.524 Nvme1n1 : 2.06 124.42 7.78 31.11 0.00 408828.59 22500.69 1069547.52
00:25:32.524 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:32.524 Job: Nvme2n1 ended in about 2.06 seconds with error
00:25:32.524 Verification LBA range: start 0x0 length 0x400
00:25:32.524 Nvme2n1 : 2.06 127.27 7.95 31.09 0.00 397774.35 3194.88 1069547.52
00:25:32.524 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:32.524 Job: Nvme3n1 ended in about 2.06 seconds with error
00:25:32.524 Verification LBA range: start 0x0 length 0x400
00:25:32.524 Nvme3n1 : 2.06 133.51 8.34 31.07 0.00 379121.61 4969.81 1076538.03
00:25:32.524 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:32.524 Job: Nvme4n1 ended in about 2.06 seconds with error
00:25:32.524 Verification LBA range: start 0x0 length 0x400
00:25:32.524 Nvme4n1 : 2.06 127.61 7.98 31.05 0.00 389416.87 11905.71 1076538.03
00:25:32.524 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:32.524 Job: Nvme5n1 ended in about 2.06 seconds with error
00:25:32.524 Verification LBA range: start 0x0 length 0x400
00:25:32.524 Nvme5n1 : 2.06 124.14 7.76 31.04 0.00 394216.11 51991.89 1076538.03
00:25:32.524 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:32.524 Job: Nvme6n1 ended in about 2.06 seconds with error
00:25:32.524 Verification LBA range: start 0x0 length 0x400
00:25:32.524 Nvme6n1 : 2.06 124.08 7.75 31.02 0.00 390600.02 71215.79 1181395.63
00:25:32.524 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:32.524 Job: Nvme7n1 ended in about 2.06 seconds with error
00:25:32.524 Verification LBA range: start 0x0 length 0x400
00:25:32.524 Nvme7n1 : 2.06 124.01 7.75 31.00 0.00 386954.24 14854.83 1160424.11
00:25:32.524 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:32.524 Job: Nvme8n1 ended in about 2.07 seconds with error
00:25:32.524 Verification LBA range: start 0x0 length 0x400
00:25:32.524 Nvme8n1 : 2.07 123.94 7.75 30.98 0.00 383313.24 62914.56 1139452.59
00:25:32.524 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:32.524 Job: Nvme9n1 ended in about 2.07 seconds with error
00:25:32.524 Verification LBA range: start 0x0 length 0x400
00:25:32.524 Nvme9n1 : 2.07 123.87 7.74 30.97 0.00 379578.03 43690.67 1125471.57
00:25:32.524 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:32.524 Job: Nvme10n1 ended in about 2.07 seconds with error
00:25:32.524 Verification LBA range: start 0x0 length 0x400
00:25:32.524 Nvme10n1 : 2.07 30.95 1.93 30.95 0.00 940366.51 44564.48 1104500.05
00:25:32.524 ===================================================================================================================
00:25:32.524 Total : 1163.79 72.74 310.28 0.00 413098.61 3194.88 1181395.63
00:25:32.524 [2024-06-10 11:34:01.371304] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:32.524 [2024-06-10 11:34:01.371323] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:25:32.524 [2024-06-10 11:34:01.371334] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:25:32.524 [2024-06-10 11:34:01.388011] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:25:32.524 [2024-06-10 11:34:01.388030] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:25:32.524 [2024-06-10 11:34:01.388036] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c80
00:25:32.524 [2024-06-10 11:34:01.388398] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:25:32.524 [2024-06-10 11:34:01.388407] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:25:32.524 [2024-06-10 11:34:01.388413] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5300
00:25:32.524 [2024-06-10 11:34:01.388591] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:25:32.524 [2024-06-10 11:34:01.388599] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:25:32.524 [2024-06-10 11:34:01.388605] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:25:32.524 [2024-06-10 11:34:01.388914] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:25:32.524 [2024-06-10 11:34:01.388922] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:25:32.524 [2024-06-10 11:34:01.388928] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bd100
00:25:32.524 [2024-06-10 11:34:01.389229] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:25:32.524 [2024-06-10 11:34:01.389237] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:25:32.524 [2024-06-10 11:34:01.389243] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c5040
00:25:32.524 [2024-06-10 11:34:01.389548] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:25:32.524 [2024-06-10 11:34:01.389555] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: 
RDMA connect error -74 00:25:32.524 [2024-06-10 11:34:01.389561] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192dc100 00:25:32.524 [2024-06-10 11:34:01.389887] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:32.524 [2024-06-10 11:34:01.389895] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:32.524 [2024-06-10 11:34:01.389901] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d1100 00:25:32.524 [2024-06-10 11:34:01.391289] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:32.524 [2024-06-10 11:34:01.391300] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:32.524 [2024-06-10 11:34:01.391306] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928dd80 00:25:32.524 [2024-06-10 11:34:01.391555] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:32.524 [2024-06-10 11:34:01.391566] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:32.524 [2024-06-10 11:34:01.391572] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928ee00 00:25:32.524 [2024-06-10 11:34:01.391722] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:32.524 [2024-06-10 11:34:01.391730] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:32.524 [2024-06-10 11:34:01.391735] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928d5c0 00:25:32.785 11:34:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3703584 00:25:32.785 11:34:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:25:32.785 11:34:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:32.785 11:34:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:32.785 11:34:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:32.785 11:34:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:32.785 11:34:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:32.785 11:34:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:25:32.785 11:34:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:32.786 11:34:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:32.786 11:34:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:25:32.786 11:34:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:32.786 11:34:01 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:32.786 rmmod nvme_rdma 00:25:32.786 rmmod nvme_fabrics 00:25:32.786 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 3703584 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:25:32.786 11:34:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:32.786 11:34:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:25:32.786 11:34:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:25:32.786 11:34:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:32.786 11:34:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:32.786 11:34:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:32.786 00:25:32.786 real 0m5.076s 00:25:32.786 user 0m17.306s 00:25:32.786 sys 0m0.989s 00:25:32.786 11:34:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:32.786 11:34:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:32.786 ************************************ 00:25:32.786 END TEST nvmf_shutdown_tc3 00:25:32.786 ************************************ 00:25:32.786 11:34:01 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:25:32.786 00:25:32.786 real 0m24.767s 00:25:32.786 user 1m10.217s 00:25:32.786 sys 0m8.466s 00:25:32.786 11:34:01 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:32.786 11:34:01 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:32.786 ************************************ 00:25:32.786 END TEST nvmf_shutdown 00:25:32.786 ************************************ 00:25:32.786 11:34:01 nvmf_rdma -- nvmf/nvmf.sh@85 -- # timing_exit target 00:25:32.786 11:34:01 nvmf_rdma -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:32.786 11:34:01 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:32.786 11:34:01 nvmf_rdma -- nvmf/nvmf.sh@87 -- # timing_enter host 00:25:32.786 11:34:01 nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:32.786 11:34:01 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:32.786 11:34:01 nvmf_rdma -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:25:32.786 11:34:01 nvmf_rdma -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:25:32.786 11:34:01 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:32.786 11:34:01 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:32.786 11:34:01 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:32.786 ************************************ 00:25:32.786 START TEST nvmf_multicontroller 00:25:32.786 ************************************ 00:25:32.786 11:34:01 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:25:33.046 * Looking for test storage... 
00:25:33.046 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA 
because the rdma stack fails to configure the same IP for host and target.' 00:25:33.046 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:25:33.046 00:25:33.046 real 0m0.133s 00:25:33.046 user 0m0.055s 00:25:33.046 sys 0m0.084s 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:33.046 11:34:01 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:33.046 ************************************ 00:25:33.046 END TEST nvmf_multicontroller 00:25:33.046 ************************************ 00:25:33.046 11:34:01 nvmf_rdma -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:25:33.046 11:34:01 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:33.046 11:34:01 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:33.046 11:34:01 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:33.046 ************************************ 00:25:33.046 START TEST nvmf_aer 00:25:33.046 ************************************ 00:25:33.046 11:34:01 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:25:33.307 * Looking for test storage... 00:25:33.307 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:33.307 11:34:02 
nvmf_rdma.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:25:33.307 11:34:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 
00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:25:39.894 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:25:39.894 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:25:39.894 Found net devices under 0000:98:00.0: mlx_0_0 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:25:39.894 Found net devices under 0000:98:00.1: mlx_0_1 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@420 -- # rdma_device_init 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:39.894 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # uname 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:40.156 26: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:40.156 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:25:40.156 altname enp152s0f0np0 00:25:40.156 altname ens817f0np0 00:25:40.156 inet 192.168.100.8/24 scope global mlx_0_0 00:25:40.156 valid_lft forever preferred_lft forever 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:40.156 27: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:40.156 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:25:40.156 altname enp152s0f1np1 00:25:40.156 altname ens817f1np1 00:25:40.156 inet 192.168.100.9/24 scope global mlx_0_1 00:25:40.156 valid_lft forever preferred_lft forever 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:40.156 11:34:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 
)) 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:40.156 192.168.100.9' 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:40.156 192.168.100.9' 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # head -n 1 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:40.156 192.168.100.9' 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # tail -n +2 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # head -n 1 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- 
host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3708057 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3708057 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@830 -- # '[' -z 3708057 ']' 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:40.156 11:34:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:40.417 [2024-06-10 11:34:09.147247] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:25:40.417 [2024-06-10 11:34:09.147314] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.417 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.417 [2024-06-10 11:34:09.212572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:40.417 [2024-06-10 11:34:09.286775] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.417 [2024-06-10 11:34:09.286812] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.417 [2024-06-10 11:34:09.286819] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.417 [2024-06-10 11:34:09.286826] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.417 [2024-06-10 11:34:09.286832] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:40.417 [2024-06-10 11:34:09.286916] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.417 [2024-06-10 11:34:09.287049] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:25:40.417 [2024-06-10 11:34:09.287206] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.417 [2024-06-10 11:34:09.287207] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:25:40.987 11:34:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:40.987 11:34:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@863 -- # return 0 00:25:40.987 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:40.987 11:34:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:40.987 11:34:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:41.248 11:34:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.248 11:34:09 nvmf_rdma.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:41.248 11:34:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.248 11:34:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:41.248 [2024-06-10 11:34:10.012929] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24b50b0/0x24b95a0) succeed. 00:25:41.248 [2024-06-10 11:34:10.027773] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24b66f0/0x24fac30) succeed. 00:25:41.248 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.248 11:34:10 nvmf_rdma.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:41.248 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.248 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:41.248 Malloc0 00:25:41.248 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.248 11:34:10 nvmf_rdma.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:41.248 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.248 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:41.248 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.248 11:34:10 nvmf_rdma.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:41.248 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.248 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:41.248 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.248 11:34:10 nvmf_rdma.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:41.248 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.248 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:41.248 [2024-06-10 11:34:10.209109] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:41.248 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.248 11:34:10 nvmf_rdma.nvmf_aer -- 
host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:41.248 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.248 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:41.508 [ 00:25:41.508 { 00:25:41.508 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:41.508 "subtype": "Discovery", 00:25:41.508 "listen_addresses": [], 00:25:41.508 "allow_any_host": true, 00:25:41.508 "hosts": [] 00:25:41.508 }, 00:25:41.508 { 00:25:41.508 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:41.508 "subtype": "NVMe", 00:25:41.508 "listen_addresses": [ 00:25:41.508 { 00:25:41.508 "trtype": "RDMA", 00:25:41.508 "adrfam": "IPv4", 00:25:41.508 "traddr": "192.168.100.8", 00:25:41.508 "trsvcid": "4420" 00:25:41.508 } 00:25:41.508 ], 00:25:41.508 "allow_any_host": true, 00:25:41.508 "hosts": [], 00:25:41.508 "serial_number": "SPDK00000000000001", 00:25:41.508 "model_number": "SPDK bdev Controller", 00:25:41.508 "max_namespaces": 2, 00:25:41.508 "min_cntlid": 1, 00:25:41.508 "max_cntlid": 65519, 00:25:41.508 "namespaces": [ 00:25:41.508 { 00:25:41.508 "nsid": 1, 00:25:41.508 "bdev_name": "Malloc0", 00:25:41.508 "name": "Malloc0", 00:25:41.508 "nguid": "C7E2B44BA4CB4CD988AD290AD41CBE5B", 00:25:41.508 "uuid": "c7e2b44b-a4cb-4cd9-88ad-290ad41cbe5b" 00:25:41.508 } 00:25:41.508 ] 00:25:41.508 } 00:25:41.508 ] 00:25:41.508 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.508 11:34:10 nvmf_rdma.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:41.508 11:34:10 nvmf_rdma.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:41.508 11:34:10 nvmf_rdma.nvmf_aer -- host/aer.sh@33 -- # aerpid=3708295 00:25:41.508 11:34:10 nvmf_rdma.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:41.508 11:34:10 nvmf_rdma.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:41.508 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1264 -- # local i=0 00:25:41.508 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:41.508 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 0 -lt 200 ']' 00:25:41.508 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1267 -- # i=1 00:25:41.508 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:25:41.508 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.508 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:41.508 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 1 -lt 200 ']' 00:25:41.508 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1267 -- # i=2 00:25:41.508 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:25:41.508 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:41.508 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:41.509 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1275 -- # return 0 00:25:41.509 11:34:10 nvmf_rdma.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:41.509 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.509 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:41.509 Malloc1 00:25:41.509 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.509 11:34:10 nvmf_rdma.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:41.509 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.509 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:41.769 [ 00:25:41.769 { 00:25:41.769 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:41.769 "subtype": "Discovery", 00:25:41.769 "listen_addresses": [], 00:25:41.769 "allow_any_host": true, 00:25:41.769 "hosts": [] 00:25:41.769 }, 00:25:41.769 { 00:25:41.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:41.769 "subtype": "NVMe", 00:25:41.769 "listen_addresses": [ 00:25:41.769 { 00:25:41.769 "trtype": "RDMA", 00:25:41.769 "adrfam": "IPv4", 00:25:41.769 "traddr": "192.168.100.8", 00:25:41.769 "trsvcid": "4420" 00:25:41.769 } 00:25:41.769 ], 00:25:41.769 "allow_any_host": true, 00:25:41.769 "hosts": [], 00:25:41.769 "serial_number": "SPDK00000000000001", 00:25:41.769 "model_number": "SPDK bdev Controller", 00:25:41.769 "max_namespaces": 2, 00:25:41.769 "min_cntlid": 1, 00:25:41.769 "max_cntlid": 65519, 00:25:41.769 "namespaces": [ 00:25:41.769 { 00:25:41.769 "nsid": 1, 00:25:41.769 "bdev_name": "Malloc0", 00:25:41.769 "name": "Malloc0", 00:25:41.769 "nguid": "C7E2B44BA4CB4CD988AD290AD41CBE5B", 00:25:41.769 "uuid": "c7e2b44b-a4cb-4cd9-88ad-290ad41cbe5b" 00:25:41.769 }, 00:25:41.769 { 00:25:41.769 "nsid": 2, 00:25:41.769 "bdev_name": "Malloc1", 00:25:41.769 "name": "Malloc1", 00:25:41.769 "nguid": "4A5DF8E242F74AF5BBA488F86DCB25BE", 00:25:41.769 "uuid": "4a5df8e2-42f7-4af5-bba4-88f86dcb25be" 00:25:41.769 } 00:25:41.769 ] 00:25:41.769 } 00:25:41.769 ] 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- host/aer.sh@43 -- # wait 3708295 00:25:41.769 Asynchronous Event Request test 00:25:41.769 Attaching to 192.168.100.8 00:25:41.769 Attached to 192.168.100.8 00:25:41.769 Registering asynchronous event callbacks... 00:25:41.769 Starting namespace attribute notice tests for all controllers... 00:25:41.769 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:41.769 aer_cb - Changed Namespace 00:25:41.769 Cleaning up... 
00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:41.769 rmmod nvme_rdma 00:25:41.769 rmmod nvme_fabrics 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3708057 ']' 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3708057 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@949 -- # '[' -z 3708057 ']' 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@953 -- # kill -0 3708057 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@954 -- # uname 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3708057 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3708057' 00:25:41.769 killing process with pid 3708057 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@968 -- # kill 3708057 00:25:41.769 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@973 -- # wait 3708057 00:25:42.030 11:34:10 nvmf_rdma.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:42.030 11:34:10 nvmf_rdma.nvmf_aer -- 
nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:42.030 00:25:42.030 real 0m8.985s 00:25:42.030 user 0m8.627s 00:25:42.030 sys 0m5.595s 00:25:42.030 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:42.030 11:34:10 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:42.030 ************************************ 00:25:42.030 END TEST nvmf_aer 00:25:42.030 ************************************ 00:25:42.030 11:34:10 nvmf_rdma -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:25:42.030 11:34:10 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:42.030 11:34:10 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:42.030 11:34:10 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:42.290 ************************************ 00:25:42.290 START TEST nvmf_async_init 00:25:42.290 ************************************ 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:25:42.290 * Looking for test storage... 00:25:42.290 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.290 11:34:11 nvmf_rdma.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:42.291 11:34:11 
nvmf_rdma.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # nguid=67aeb456e8704152abfafb02f8e2be4f 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:25:42.291 11:34:11 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
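Editor's note: host/async_init.sh derives the namespace GUID it will later pass to nvmf_subsystem_add_ns by stripping the dashes from a stock UUID; a sketch of the idiom, with the value taken from this run:

    nguid=$(uuidgen | tr -d -)   # here: 67aeb456e8704152abfafb02f8e2be4f

The dashed form (67aeb456-e870-4152-abfa-fb02f8e2be4f) reappears below as the "uuid" and "aliases" fields in each bdev_get_bdevs dump.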
00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:50.429 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:25:50.430 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:25:50.430 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:50.430 11:34:18 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:25:50.430 Found net devices under 0000:98:00.0: mlx_0_0 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:25:50.430 Found net devices under 0000:98:00.1: mlx_0_1 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@420 -- # rdma_device_init 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # uname 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:50.430 
11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:50.430 26: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:50.430 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:25:50.430 altname enp152s0f0np0 00:25:50.430 altname ens817f0np0 00:25:50.430 inet 192.168.100.8/24 scope global mlx_0_0 00:25:50.430 valid_lft forever preferred_lft forever 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:50.430 11:34:18 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:50.430 27: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:50.430 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:25:50.430 altname enp152s0f1np1 00:25:50.430 altname ens817f1np1 00:25:50.430 inet 192.168.100.9/24 scope global mlx_0_1 00:25:50.430 valid_lft forever preferred_lft forever 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:50.430 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:50.431 192.168.100.9' 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:50.431 192.168.100.9' 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # head -n 1 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:50.431 192.168.100.9' 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # tail -n +2 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # head -n 1 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3712098 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3712098 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@830 -- # '[' -z 3712098 ']' 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
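Editor's note: a condensed sketch of the address harvesting just traced, mirroring get_ip_address and the NVMF_*_TARGET_IP assignments in nvmf/common.sh; the interface names and addresses are exactly the ones printed above:

    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8
    ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.9
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)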
00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:50.431 11:34:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.431 [2024-06-10 11:34:18.393097] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:25:50.431 [2024-06-10 11:34:18.393162] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.431 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.431 [2024-06-10 11:34:18.457720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.431 [2024-06-10 11:34:18.530650] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.431 [2024-06-10 11:34:18.530684] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:50.431 [2024-06-10 11:34:18.530692] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:50.431 [2024-06-10 11:34:18.530698] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:50.431 [2024-06-10 11:34:18.530704] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:50.431 [2024-06-10 11:34:18.530722] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@863 -- # return 0 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.431 [2024-06-10 11:34:19.233787] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb46f00/0xb4b3f0) succeed. 00:25:50.431 [2024-06-10 11:34:19.245963] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb48400/0xb8ca80) succeed. 
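Editor's note: a hedged reconstruction of the bring-up the EAL and rdma.c notices above correspond to; both commands are lifted from the trace (nvmfappstart -m 0x1 and async_init.sh@26), with paths shortened for readability:

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &                        # single-core target, pid 3712098 in this run
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024

Each "create_ib_device ... succeed" line is the RDMA transport claiming one of the two mlx5 ports discovered earlier.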
00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.431 null0 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 67aeb456e8704152abfafb02f8e2be4f 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.431 [2024-06-10 11:34:19.353030] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:50.431 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.432 11:34:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:50.432 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.432 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.693 nvme0n1 00:25:50.693 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.693 11:34:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:50.693 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.693 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.693 [ 00:25:50.693 { 00:25:50.693 "name": "nvme0n1", 00:25:50.693 "aliases": [ 00:25:50.693 "67aeb456-e870-4152-abfa-fb02f8e2be4f" 00:25:50.693 ], 00:25:50.693 "product_name": "NVMe disk", 00:25:50.693 "block_size": 512, 00:25:50.693 "num_blocks": 2097152, 00:25:50.693 "uuid": 
"67aeb456-e870-4152-abfa-fb02f8e2be4f", 00:25:50.693 "assigned_rate_limits": { 00:25:50.693 "rw_ios_per_sec": 0, 00:25:50.693 "rw_mbytes_per_sec": 0, 00:25:50.693 "r_mbytes_per_sec": 0, 00:25:50.693 "w_mbytes_per_sec": 0 00:25:50.693 }, 00:25:50.693 "claimed": false, 00:25:50.693 "zoned": false, 00:25:50.693 "supported_io_types": { 00:25:50.693 "read": true, 00:25:50.693 "write": true, 00:25:50.693 "unmap": false, 00:25:50.693 "write_zeroes": true, 00:25:50.693 "flush": true, 00:25:50.693 "reset": true, 00:25:50.693 "compare": true, 00:25:50.693 "compare_and_write": true, 00:25:50.693 "abort": true, 00:25:50.693 "nvme_admin": true, 00:25:50.693 "nvme_io": true 00:25:50.693 }, 00:25:50.693 "memory_domains": [ 00:25:50.693 { 00:25:50.693 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:25:50.693 "dma_device_type": 0 00:25:50.693 } 00:25:50.693 ], 00:25:50.693 "driver_specific": { 00:25:50.693 "nvme": [ 00:25:50.693 { 00:25:50.693 "trid": { 00:25:50.693 "trtype": "RDMA", 00:25:50.693 "adrfam": "IPv4", 00:25:50.693 "traddr": "192.168.100.8", 00:25:50.693 "trsvcid": "4420", 00:25:50.693 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:50.693 }, 00:25:50.693 "ctrlr_data": { 00:25:50.693 "cntlid": 1, 00:25:50.693 "vendor_id": "0x8086", 00:25:50.693 "model_number": "SPDK bdev Controller", 00:25:50.693 "serial_number": "00000000000000000000", 00:25:50.693 "firmware_revision": "24.09", 00:25:50.693 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:50.693 "oacs": { 00:25:50.693 "security": 0, 00:25:50.693 "format": 0, 00:25:50.693 "firmware": 0, 00:25:50.693 "ns_manage": 0 00:25:50.693 }, 00:25:50.693 "multi_ctrlr": true, 00:25:50.693 "ana_reporting": false 00:25:50.693 }, 00:25:50.693 "vs": { 00:25:50.693 "nvme_version": "1.3" 00:25:50.693 }, 00:25:50.693 "ns_data": { 00:25:50.693 "id": 1, 00:25:50.693 "can_share": true 00:25:50.693 } 00:25:50.693 } 00:25:50.693 ], 00:25:50.693 "mp_policy": "active_passive" 00:25:50.693 } 00:25:50.693 } 00:25:50.693 ] 00:25:50.693 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.693 11:34:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:50.693 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.693 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.693 [2024-06-10 11:34:19.483507] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:50.693 [2024-06-10 11:34:19.509683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:50.693 [2024-06-10 11:34:19.535617] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:50.693 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.693 11:34:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:50.693 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.693 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.693 [ 00:25:50.693 { 00:25:50.693 "name": "nvme0n1", 00:25:50.693 "aliases": [ 00:25:50.693 "67aeb456-e870-4152-abfa-fb02f8e2be4f" 00:25:50.693 ], 00:25:50.693 "product_name": "NVMe disk", 00:25:50.693 "block_size": 512, 00:25:50.693 "num_blocks": 2097152, 00:25:50.693 "uuid": "67aeb456-e870-4152-abfa-fb02f8e2be4f", 00:25:50.693 "assigned_rate_limits": { 00:25:50.693 "rw_ios_per_sec": 0, 00:25:50.693 "rw_mbytes_per_sec": 0, 00:25:50.693 "r_mbytes_per_sec": 0, 00:25:50.693 "w_mbytes_per_sec": 0 00:25:50.693 }, 00:25:50.693 "claimed": false, 00:25:50.693 "zoned": false, 00:25:50.693 "supported_io_types": { 00:25:50.693 "read": true, 00:25:50.693 "write": true, 00:25:50.693 "unmap": false, 00:25:50.693 "write_zeroes": true, 00:25:50.693 "flush": true, 00:25:50.693 "reset": true, 00:25:50.693 "compare": true, 00:25:50.693 "compare_and_write": true, 00:25:50.693 "abort": true, 00:25:50.693 "nvme_admin": true, 00:25:50.693 "nvme_io": true 00:25:50.693 }, 00:25:50.693 "memory_domains": [ 00:25:50.693 { 00:25:50.693 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:25:50.693 "dma_device_type": 0 00:25:50.693 } 00:25:50.693 ], 00:25:50.693 "driver_specific": { 00:25:50.693 "nvme": [ 00:25:50.693 { 00:25:50.693 "trid": { 00:25:50.693 "trtype": "RDMA", 00:25:50.693 "adrfam": "IPv4", 00:25:50.693 "traddr": "192.168.100.8", 00:25:50.693 "trsvcid": "4420", 00:25:50.693 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:50.693 }, 00:25:50.693 "ctrlr_data": { 00:25:50.693 "cntlid": 2, 00:25:50.693 "vendor_id": "0x8086", 00:25:50.693 "model_number": "SPDK bdev Controller", 00:25:50.693 "serial_number": "00000000000000000000", 00:25:50.693 "firmware_revision": "24.09", 00:25:50.693 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:50.693 "oacs": { 00:25:50.693 "security": 0, 00:25:50.693 "format": 0, 00:25:50.694 "firmware": 0, 00:25:50.694 "ns_manage": 0 00:25:50.694 }, 00:25:50.694 "multi_ctrlr": true, 00:25:50.694 "ana_reporting": false 00:25:50.694 }, 00:25:50.694 "vs": { 00:25:50.694 "nvme_version": "1.3" 00:25:50.694 }, 00:25:50.694 "ns_data": { 00:25:50.694 "id": 1, 00:25:50.694 "can_share": true 00:25:50.694 } 00:25:50.694 } 00:25:50.694 ], 00:25:50.694 "mp_policy": "active_passive" 00:25:50.694 } 00:25:50.694 } 00:25:50.694 ] 00:25:50.694 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.694 11:34:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.694 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.694 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.694 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.694 11:34:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:50.694 11:34:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.E38bP7OdiV 00:25:50.694 11:34:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:50.694 11:34:19 
nvmf_rdma.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.E38bP7OdiV 00:25:50.694 11:34:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:50.694 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.694 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.694 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.694 11:34:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:25:50.694 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.694 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.694 [2024-06-10 11:34:19.616372] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:25:50.694 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.694 11:34:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.E38bP7OdiV 00:25:50.694 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.694 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.694 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.694 11:34:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.E38bP7OdiV 00:25:50.694 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.694 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.694 [2024-06-10 11:34:19.640414] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:50.955 nvme0n1 00:25:50.955 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.955 11:34:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:50.955 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.955 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.955 [ 00:25:50.955 { 00:25:50.955 "name": "nvme0n1", 00:25:50.955 "aliases": [ 00:25:50.955 "67aeb456-e870-4152-abfa-fb02f8e2be4f" 00:25:50.955 ], 00:25:50.955 "product_name": "NVMe disk", 00:25:50.955 "block_size": 512, 00:25:50.955 "num_blocks": 2097152, 00:25:50.955 "uuid": "67aeb456-e870-4152-abfa-fb02f8e2be4f", 00:25:50.955 "assigned_rate_limits": { 00:25:50.955 "rw_ios_per_sec": 0, 00:25:50.955 "rw_mbytes_per_sec": 0, 00:25:50.955 "r_mbytes_per_sec": 0, 00:25:50.955 "w_mbytes_per_sec": 0 00:25:50.955 }, 00:25:50.955 "claimed": false, 00:25:50.955 "zoned": false, 00:25:50.955 "supported_io_types": { 00:25:50.955 "read": true, 00:25:50.955 "write": true, 00:25:50.955 "unmap": false, 00:25:50.955 "write_zeroes": true, 00:25:50.955 "flush": true, 00:25:50.955 "reset": true, 00:25:50.955 "compare": true, 00:25:50.955 "compare_and_write": true, 00:25:50.955 "abort": true, 
00:25:50.955 "nvme_admin": true, 00:25:50.955 "nvme_io": true 00:25:50.955 }, 00:25:50.955 "memory_domains": [ 00:25:50.955 { 00:25:50.955 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:25:50.955 "dma_device_type": 0 00:25:50.955 } 00:25:50.955 ], 00:25:50.955 "driver_specific": { 00:25:50.955 "nvme": [ 00:25:50.955 { 00:25:50.955 "trid": { 00:25:50.955 "trtype": "RDMA", 00:25:50.955 "adrfam": "IPv4", 00:25:50.955 "traddr": "192.168.100.8", 00:25:50.955 "trsvcid": "4421", 00:25:50.955 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:50.955 }, 00:25:50.955 "ctrlr_data": { 00:25:50.955 "cntlid": 3, 00:25:50.955 "vendor_id": "0x8086", 00:25:50.955 "model_number": "SPDK bdev Controller", 00:25:50.955 "serial_number": "00000000000000000000", 00:25:50.955 "firmware_revision": "24.09", 00:25:50.955 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:50.955 "oacs": { 00:25:50.955 "security": 0, 00:25:50.955 "format": 0, 00:25:50.955 "firmware": 0, 00:25:50.955 "ns_manage": 0 00:25:50.955 }, 00:25:50.956 "multi_ctrlr": true, 00:25:50.956 "ana_reporting": false 00:25:50.956 }, 00:25:50.956 "vs": { 00:25:50.956 "nvme_version": "1.3" 00:25:50.956 }, 00:25:50.956 "ns_data": { 00:25:50.956 "id": 1, 00:25:50.956 "can_share": true 00:25:50.956 } 00:25:50.956 } 00:25:50.956 ], 00:25:50.956 "mp_policy": "active_passive" 00:25:50.956 } 00:25:50.956 } 00:25:50.956 ] 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.E38bP7OdiV 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:50.956 rmmod nvme_rdma 00:25:50.956 rmmod nvme_fabrics 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3712098 ']' 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3712098 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@949 -- # '[' -z 3712098 ']' 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@953 -- # kill -0 3712098 00:25:50.956 11:34:19 
nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@954 -- # uname 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3712098 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3712098' 00:25:50.956 killing process with pid 3712098 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@968 -- # kill 3712098 00:25:50.956 11:34:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@973 -- # wait 3712098 00:25:51.216 11:34:20 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:51.216 11:34:20 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:51.216 00:25:51.216 real 0m9.056s 00:25:51.216 user 0m3.819s 00:25:51.216 sys 0m5.772s 00:25:51.216 11:34:20 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:51.216 11:34:20 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:51.216 ************************************ 00:25:51.216 END TEST nvmf_async_init 00:25:51.216 ************************************ 00:25:51.216 11:34:20 nvmf_rdma -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:25:51.216 11:34:20 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:51.216 11:34:20 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:51.216 11:34:20 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:51.216 ************************************ 00:25:51.216 START TEST dma 00:25:51.216 ************************************ 00:25:51.216 11:34:20 nvmf_rdma.dma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:25:51.477 * Looking for test storage... 
00:25:51.477 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:51.477 11:34:20 nvmf_rdma.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@7 -- # uname -s 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:51.477 11:34:20 nvmf_rdma.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.477 11:34:20 nvmf_rdma.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.477 11:34:20 nvmf_rdma.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.477 11:34:20 nvmf_rdma.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.477 11:34:20 nvmf_rdma.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.477 11:34:20 nvmf_rdma.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.477 11:34:20 nvmf_rdma.dma -- paths/export.sh@5 -- # export PATH 00:25:51.477 11:34:20 nvmf_rdma.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@47 -- # : 0 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:51.477 11:34:20 nvmf_rdma.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:25:51.477 11:34:20 nvmf_rdma.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:25:51.477 11:34:20 nvmf_rdma.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:25:51.477 11:34:20 nvmf_rdma.dma -- host/dma.sh@18 -- # subsystem=0 00:25:51.477 11:34:20 nvmf_rdma.dma -- host/dma.sh@93 -- # nvmftestinit 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.477 11:34:20 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:51.477 11:34:20 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:51.477 11:34:20 nvmf_rdma.dma -- nvmf/common.sh@285 -- # xtrace_disable 00:25:51.477 11:34:20 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:25:58.089 11:34:26 nvmf_rdma.dma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:58.089 11:34:26 nvmf_rdma.dma -- 
nvmf/common.sh@291 -- # pci_devs=() 00:25:58.089 11:34:26 nvmf_rdma.dma -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:58.089 11:34:26 nvmf_rdma.dma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:58.089 11:34:26 nvmf_rdma.dma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:58.089 11:34:26 nvmf_rdma.dma -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:58.089 11:34:26 nvmf_rdma.dma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:58.089 11:34:26 nvmf_rdma.dma -- nvmf/common.sh@295 -- # net_devs=() 00:25:58.089 11:34:26 nvmf_rdma.dma -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:58.089 11:34:26 nvmf_rdma.dma -- nvmf/common.sh@296 -- # e810=() 00:25:58.089 11:34:26 nvmf_rdma.dma -- nvmf/common.sh@296 -- # local -ga e810 00:25:58.089 11:34:26 nvmf_rdma.dma -- nvmf/common.sh@297 -- # x722=() 00:25:58.089 11:34:26 nvmf_rdma.dma -- nvmf/common.sh@297 -- # local -ga x722 00:25:58.089 11:34:26 nvmf_rdma.dma -- nvmf/common.sh@298 -- # mlx=() 00:25:58.089 11:34:26 nvmf_rdma.dma -- nvmf/common.sh@298 -- # local -ga mlx 00:25:58.089 11:34:26 nvmf_rdma.dma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:58.089 11:34:26 nvmf_rdma.dma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:58.089 11:34:26 nvmf_rdma.dma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:58.089 11:34:26 nvmf_rdma.dma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:58.089 11:34:26 nvmf_rdma.dma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:25:58.089 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:58.089 11:34:27 nvmf_rdma.dma -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:25:58.089 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:25:58.089 Found net devices under 0000:98:00.0: mlx_0_0 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.089 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:25:58.089 Found net devices under 0000:98:00.1: mlx_0_1 00:25:58.090 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.090 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:58.090 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@414 -- # is_hw=yes 00:25:58.090 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:58.090 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:58.090 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:58.090 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@420 -- # rdma_device_init 00:25:58.090 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:58.090 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@58 -- # uname 00:25:58.090 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:58.090 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:58.090 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:58.090 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:58.090 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:58.090 11:34:27 nvmf_rdma.dma -- 
nvmf/common.sh@66 -- # modprobe iw_cm 00:25:58.090 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:58.358 26: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:58.358 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:25:58.358 altname enp152s0f0np0 00:25:58.358 altname ens817f0np0 00:25:58.358 inet 192.168.100.8/24 scope global mlx_0_0 00:25:58.358 valid_lft forever preferred_lft forever 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:58.358 11:34:27 
nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:58.358 27: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:58.358 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:25:58.358 altname enp152s0f1np1 00:25:58.358 altname ens817f1np1 00:25:58.358 inet 192.168.100.9/24 scope global mlx_0_1 00:25:58.358 valid_lft forever preferred_lft forever 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@422 -- # return 0 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 
00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:58.358 192.168.100.9' 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:58.358 192.168.100.9' 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@457 -- # head -n 1 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:58.358 192.168.100.9' 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@458 -- # tail -n +2 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@458 -- # head -n 1 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:58.358 11:34:27 nvmf_rdma.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:58.358 11:34:27 nvmf_rdma.dma -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:58.358 11:34:27 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@481 -- # nvmfpid=3716137 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@482 -- # waitforlisten 3716137 00:25:58.358 11:34:27 nvmf_rdma.dma -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:58.358 11:34:27 nvmf_rdma.dma -- common/autotest_common.sh@830 -- # '[' -z 3716137 ']' 00:25:58.358 11:34:27 nvmf_rdma.dma -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.359 11:34:27 nvmf_rdma.dma -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:58.359 11:34:27 nvmf_rdma.dma -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.359 11:34:27 nvmf_rdma.dma -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:58.359 11:34:27 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:25:58.359 [2024-06-10 11:34:27.291628] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:25:58.359 [2024-06-10 11:34:27.291691] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:58.359 EAL: No free 2048 kB hugepages reported on node 1 00:25:58.619 [2024-06-10 11:34:27.358307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:58.619 [2024-06-10 11:34:27.431136] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:58.619 [2024-06-10 11:34:27.431175] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:58.619 [2024-06-10 11:34:27.431185] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:58.619 [2024-06-10 11:34:27.431191] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:58.619 [2024-06-10 11:34:27.431197] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:58.619 [2024-06-10 11:34:27.431341] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.619 [2024-06-10 11:34:27.431342] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.192 11:34:28 nvmf_rdma.dma -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:59.192 11:34:28 nvmf_rdma.dma -- common/autotest_common.sh@863 -- # return 0 00:25:59.192 11:34:28 nvmf_rdma.dma -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:59.192 11:34:28 nvmf_rdma.dma -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:59.192 11:34:28 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:25:59.192 11:34:28 nvmf_rdma.dma -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:59.192 11:34:28 nvmf_rdma.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:25:59.192 11:34:28 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:59.192 11:34:28 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:25:59.192 [2024-06-10 11:34:28.131387] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xebda20/0xec1f10) succeed. 00:25:59.192 [2024-06-10 11:34:28.144824] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xebef20/0xf035a0) succeed. 
00:25:59.452 11:34:28 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:59.452 11:34:28 nvmf_rdma.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:25:59.452 11:34:28 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:59.452 11:34:28 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:25:59.452 Malloc0 00:25:59.452 11:34:28 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:59.452 11:34:28 nvmf_rdma.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:59.452 11:34:28 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:59.452 11:34:28 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:25:59.452 11:34:28 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:59.452 11:34:28 nvmf_rdma.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:25:59.452 11:34:28 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:59.452 11:34:28 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:25:59.452 11:34:28 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:59.452 11:34:28 nvmf_rdma.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:25:59.453 11:34:28 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:59.453 11:34:28 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:25:59.453 [2024-06-10 11:34:28.310319] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:59.453 11:34:28 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:59.453 11:34:28 nvmf_rdma.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:25:59.453 11:34:28 nvmf_rdma.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:25:59.453 11:34:28 nvmf_rdma.dma -- nvmf/common.sh@532 -- # config=() 00:25:59.453 11:34:28 nvmf_rdma.dma -- nvmf/common.sh@532 -- # local subsystem config 00:25:59.453 11:34:28 nvmf_rdma.dma -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:59.453 11:34:28 nvmf_rdma.dma -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:59.453 { 00:25:59.453 "params": { 00:25:59.453 "name": "Nvme$subsystem", 00:25:59.453 "trtype": "$TEST_TRANSPORT", 00:25:59.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.453 "adrfam": "ipv4", 00:25:59.453 "trsvcid": "$NVMF_PORT", 00:25:59.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.453 "hdgst": ${hdgst:-false}, 00:25:59.453 "ddgst": ${ddgst:-false} 00:25:59.453 }, 00:25:59.453 "method": "bdev_nvme_attach_controller" 00:25:59.453 } 00:25:59.453 EOF 00:25:59.453 )") 00:25:59.453 11:34:28 nvmf_rdma.dma -- nvmf/common.sh@554 -- # cat 00:25:59.453 11:34:28 nvmf_rdma.dma -- nvmf/common.sh@556 -- # jq . 
00:25:59.453 11:34:28 nvmf_rdma.dma -- nvmf/common.sh@557 -- # IFS=, 00:25:59.453 11:34:28 nvmf_rdma.dma -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:59.453 "params": { 00:25:59.453 "name": "Nvme0", 00:25:59.453 "trtype": "rdma", 00:25:59.453 "traddr": "192.168.100.8", 00:25:59.453 "adrfam": "ipv4", 00:25:59.453 "trsvcid": "4420", 00:25:59.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:59.453 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:59.453 "hdgst": false, 00:25:59.453 "ddgst": false 00:25:59.453 }, 00:25:59.453 "method": "bdev_nvme_attach_controller" 00:25:59.453 }' 00:25:59.453 [2024-06-10 11:34:28.368173] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:25:59.453 [2024-06-10 11:34:28.368244] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3716431 ] 00:25:59.453 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.453 [2024-06-10 11:34:28.419858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:59.713 [2024-06-10 11:34:28.473086] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:25:59.713 [2024-06-10 11:34:28.473086] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:26:04.997 bdev Nvme0n1 reports 1 memory domains 00:26:04.997 bdev Nvme0n1 supports RDMA memory domain 00:26:04.997 Initialization complete, running randrw IO for 5 sec on 2 cores 00:26:04.997 ========================================================================== 00:26:04.997 Latency [us] 00:26:04.997 IOPS MiB/s Average min max 00:26:04.997 Core 2: 23974.67 93.65 666.89 318.48 9529.14 00:26:04.997 Core 3: 26726.11 104.40 598.05 189.30 9679.54 00:26:04.997 ========================================================================== 00:26:04.997 Total : 50700.78 198.05 630.60 189.30 9679.54 00:26:04.997 00:26:04.997 Total operations: 253518, translate 253518 pull_push 0 memzero 0 00:26:04.997 11:34:33 nvmf_rdma.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:26:04.997 11:34:33 nvmf_rdma.dma -- host/dma.sh@107 -- # gen_malloc_json 00:26:04.997 11:34:33 nvmf_rdma.dma -- host/dma.sh@21 -- # jq . 00:26:04.997 [2024-06-10 11:34:33.836679] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:26:04.997 [2024-06-10 11:34:33.836737] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3717491 ] 00:26:04.997 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.997 [2024-06-10 11:34:33.886870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:04.997 [2024-06-10 11:34:33.938971] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:26:04.997 [2024-06-10 11:34:33.939059] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:26:10.282 bdev Malloc0 reports 2 memory domains 00:26:10.282 bdev Malloc0 doesn't support RDMA memory domain 00:26:10.282 Initialization complete, running randrw IO for 5 sec on 2 cores 00:26:10.282 ========================================================================== 00:26:10.282 Latency [us] 00:26:10.282 IOPS MiB/s Average min max 00:26:10.282 Core 2: 18758.37 73.27 852.40 312.95 1388.87 00:26:10.282 Core 3: 18866.33 73.70 847.51 388.44 1436.23 00:26:10.282 ========================================================================== 00:26:10.282 Total : 37624.69 146.97 849.95 312.95 1436.23 00:26:10.282 00:26:10.282 Total operations: 188188, translate 0 pull_push 752752 memzero 0 00:26:10.282 11:34:39 nvmf_rdma.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:26:10.282 11:34:39 nvmf_rdma.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:26:10.282 11:34:39 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:26:10.282 11:34:39 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:26:10.282 Ignoring -M option 00:26:10.282 [2024-06-10 11:34:39.189018] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:26:10.283 [2024-06-10 11:34:39.189077] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718496 ] 00:26:10.283 EAL: No free 2048 kB hugepages reported on node 1 00:26:10.283 [2024-06-10 11:34:39.240000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:10.544 [2024-06-10 11:34:39.290554] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:26:10.544 [2024-06-10 11:34:39.290555] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:26:15.834 bdev a75c7145-6399-450c-8320-0eec7caaa9c0 reports 1 memory domains 00:26:15.835 bdev a75c7145-6399-450c-8320-0eec7caaa9c0 supports RDMA memory domain 00:26:15.835 Initialization complete, running randread IO for 5 sec on 2 cores 00:26:15.835 ========================================================================== 00:26:15.835 Latency [us] 00:26:15.835 IOPS MiB/s Average min max 00:26:15.835 Core 2: 121046.53 472.84 131.67 66.42 3366.74 00:26:15.835 Core 3: 127814.40 499.28 124.69 60.69 3442.45 00:26:15.835 ========================================================================== 00:26:15.835 Total : 248860.94 972.11 128.09 60.69 3442.45 00:26:15.835 00:26:15.835 Total operations: 1244402, translate 0 pull_push 0 memzero 1244402 00:26:15.835 11:34:44 nvmf_rdma.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:26:15.835 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.835 [2024-06-10 11:34:44.767351] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:18.378 Initializing NVMe Controllers 00:26:18.378 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:26:18.378 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:26:18.378 Initialization complete. Launching workers. 00:26:18.378 ======================================================== 00:26:18.378 Latency(us) 00:26:18.378 Device Information : IOPS MiB/s Average min max 00:26:18.378 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.87 7980.71 6987.03 8975.94 00:26:18.378 ======================================================== 00:26:18.378 Total : 2016.00 7.87 7980.71 6987.03 8975.94 00:26:18.378 00:26:18.378 11:34:47 nvmf_rdma.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:26:18.378 11:34:47 nvmf_rdma.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:26:18.378 11:34:47 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:26:18.378 11:34:47 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:26:18.378 [2024-06-10 11:34:47.139539] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:26:18.378 [2024-06-10 11:34:47.139586] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3719961 ] 00:26:18.378 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.378 [2024-06-10 11:34:47.189437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:18.378 [2024-06-10 11:34:47.241705] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:26:18.378 [2024-06-10 11:34:47.241705] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:26:23.665 bdev e684e3ff-699a-41dd-be3b-dccecc8339d9 reports 1 memory domains 00:26:23.665 bdev e684e3ff-699a-41dd-be3b-dccecc8339d9 supports RDMA memory domain 00:26:23.665 Initialization complete, running randrw IO for 5 sec on 2 cores 00:26:23.665 ========================================================================== 00:26:23.665 Latency [us] 00:26:23.665 IOPS MiB/s Average min max 00:26:23.665 Core 2: 21366.90 83.46 748.31 10.68 12430.55 00:26:23.665 Core 3: 27629.83 107.93 578.59 8.03 12123.02 00:26:23.665 ========================================================================== 00:26:23.665 Total : 48996.73 191.39 652.60 8.03 12430.55 00:26:23.665 00:26:23.665 Total operations: 245010, translate 244901 pull_push 0 memzero 109 00:26:23.665 11:34:52 nvmf_rdma.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:26:23.665 11:34:52 nvmf_rdma.dma -- host/dma.sh@120 -- # nvmftestfini 00:26:23.665 11:34:52 nvmf_rdma.dma -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:23.665 11:34:52 nvmf_rdma.dma -- nvmf/common.sh@117 -- # sync 00:26:23.665 11:34:52 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:26:23.665 11:34:52 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:26:23.665 11:34:52 nvmf_rdma.dma -- nvmf/common.sh@120 -- # set +e 00:26:23.665 11:34:52 nvmf_rdma.dma -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:23.665 11:34:52 nvmf_rdma.dma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:23.665 rmmod nvme_rdma 00:26:23.925 rmmod nvme_fabrics 00:26:23.925 11:34:52 nvmf_rdma.dma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:23.925 11:34:52 nvmf_rdma.dma -- nvmf/common.sh@124 -- # set -e 00:26:23.925 11:34:52 nvmf_rdma.dma -- nvmf/common.sh@125 -- # return 0 00:26:23.925 11:34:52 nvmf_rdma.dma -- nvmf/common.sh@489 -- # '[' -n 3716137 ']' 00:26:23.925 11:34:52 nvmf_rdma.dma -- nvmf/common.sh@490 -- # killprocess 3716137 00:26:23.925 11:34:52 nvmf_rdma.dma -- common/autotest_common.sh@949 -- # '[' -z 3716137 ']' 00:26:23.925 11:34:52 nvmf_rdma.dma -- common/autotest_common.sh@953 -- # kill -0 3716137 00:26:23.925 11:34:52 nvmf_rdma.dma -- common/autotest_common.sh@954 -- # uname 00:26:23.925 11:34:52 nvmf_rdma.dma -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:23.925 11:34:52 nvmf_rdma.dma -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3716137 00:26:23.925 11:34:52 nvmf_rdma.dma -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:23.925 11:34:52 nvmf_rdma.dma -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:23.925 11:34:52 nvmf_rdma.dma -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3716137' 00:26:23.925 killing process with pid 3716137 00:26:23.925 11:34:52 nvmf_rdma.dma -- common/autotest_common.sh@968 -- # kill 3716137 00:26:23.925 11:34:52 nvmf_rdma.dma -- common/autotest_common.sh@973 -- # 
wait 3716137 00:26:24.186 11:34:52 nvmf_rdma.dma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:24.186 11:34:52 nvmf_rdma.dma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:26:24.186 00:26:24.186 real 0m32.831s 00:26:24.186 user 1m35.257s 00:26:24.186 sys 0m6.103s 00:26:24.186 11:34:52 nvmf_rdma.dma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:24.186 11:34:52 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:26:24.186 ************************************ 00:26:24.186 END TEST dma 00:26:24.186 ************************************ 00:26:24.186 11:34:52 nvmf_rdma -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:26:24.186 11:34:52 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:24.186 11:34:52 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:24.186 11:34:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:24.186 ************************************ 00:26:24.186 START TEST nvmf_identify 00:26:24.186 ************************************ 00:26:24.186 11:34:53 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:26:24.186 * Looking for test storage... 00:26:24.186 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:24.186 11:34:53 nvmf_rdma.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:24.186 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:26:24.186 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:24.186 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:24.186 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:24.186 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:24.186 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:24.186 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:24.186 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:24.186 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:24.186 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:24.186 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:24.186 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:24.186 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:24.186 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:24.186 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:24.186 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:24.186 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:24.186 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:24.448 11:34:53 nvmf_rdma.nvmf_identify -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:26:24.448 11:34:53 nvmf_rdma.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:24.448 11:34:53 nvmf_rdma.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:24.448 11:34:53 nvmf_rdma.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@51 -- 
# have_pci_nics=0 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:26:24.449 11:34:53 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify 
-- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:26:31.084 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:26:31.084 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:31.084 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:31.085 11:34:59 
nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:26:31.085 Found net devices under 0000:98:00.0: mlx_0_0 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:26:31.085 Found net devices under 0000:98:00.1: mlx_0_1 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@420 -- # rdma_device_init 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # uname 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@66 -- # modprobe iw_cm 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:31.085 11:34:59 
nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:31.085 26: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:31.085 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:26:31.085 altname enp152s0f0np0 00:26:31.085 altname ens817f0np0 00:26:31.085 inet 192.168.100.8/24 scope global mlx_0_0 00:26:31.085 valid_lft forever preferred_lft forever 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:26:31.085 27: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:31.085 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:26:31.085 altname enp152s0f1np1 00:26:31.085 altname ens817f1np1 00:26:31.085 inet 192.168.100.9/24 scope global mlx_0_1 00:26:31.085 valid_lft forever preferred_lft forever 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- 
nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:31.085 11:34:59 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:31.085 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:31.085 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:31.085 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:31.085 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:31.085 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:31.085 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:26:31.085 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:31.085 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:31.085 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:31.085 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:31.085 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:31.085 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:31.085 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:26:31.085 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:31.085 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:31.085 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:31.085 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:31.085 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:31.085 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:31.085 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:31.085 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:26:31.085 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:31.085 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:31.085 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:31.086 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:31.086 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:31.086 192.168.100.9' 00:26:31.086 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:31.086 192.168.100.9' 
00:26:31.086 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # head -n 1 00:26:31.086 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:31.086 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:31.086 192.168.100.9' 00:26:31.347 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # tail -n +2 00:26:31.347 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # head -n 1 00:26:31.347 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:31.347 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:31.347 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:31.347 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:31.347 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:31.347 11:35:00 nvmf_rdma.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:31.347 11:35:00 nvmf_rdma.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:31.347 11:35:00 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:31.347 11:35:00 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:31.347 11:35:00 nvmf_rdma.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3724849 00:26:31.347 11:35:00 nvmf_rdma.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:31.348 11:35:00 nvmf_rdma.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:31.348 11:35:00 nvmf_rdma.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3724849 00:26:31.348 11:35:00 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@830 -- # '[' -z 3724849 ']' 00:26:31.348 11:35:00 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.348 11:35:00 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:31.348 11:35:00 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:31.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:31.348 11:35:00 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:31.348 11:35:00 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:31.348 [2024-06-10 11:35:00.150051] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:26:31.348 [2024-06-10 11:35:00.150122] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:31.348 EAL: No free 2048 kB hugepages reported on node 1 00:26:31.348 [2024-06-10 11:35:00.212037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:31.348 [2024-06-10 11:35:00.277935] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:31.348 [2024-06-10 11:35:00.277969] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
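(The common.sh trace above walks every RDMA-capable netdev, resolves its IPv4 address, and then splits the resulting newline-separated list into the first and second target addresses. A condensed bash sketch of that flow, with the pipelines and interface names taken from the xtrace records; get_ip_address mirrors common.sh@112-113, while the surrounding glue is a simplified restatement, not the verbatim script:

  # Resolve the IPv4 address bound to a netdev (common.sh@112-113):
  # 'ip -o -4' prints one line per address, with addr/prefix in field 4.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  # One address per RDMA interface, newline-separated (mlx_0_0 / mlx_0_1 here).
  RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")

  # Split the list the way common.sh@457-458 does above.
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9
)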
00:26:31.348 [2024-06-10 11:35:00.277977] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:31.348 [2024-06-10 11:35:00.277983] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:31.348 [2024-06-10 11:35:00.277989] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:31.348 [2024-06-10 11:35:00.278127] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:31.348 [2024-06-10 11:35:00.278239] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:26:31.348 [2024-06-10 11:35:00.278406] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.348 [2024-06-10 11:35:00.278407] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:26:32.290 11:35:00 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:32.290 11:35:00 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@863 -- # return 0 00:26:32.290 11:35:00 nvmf_rdma.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:32.290 11:35:00 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.290 11:35:00 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:32.290 [2024-06-10 11:35:00.964499] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x174c0b0/0x17505a0) succeed. 00:26:32.290 [2024-06-10 11:35:00.979021] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x174d6f0/0x1791c30) succeed. 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:32.290 Malloc0 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:32.290 
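(The host/identify.sh@24-34 records above configure the freshly started target: an RDMA transport, a 64 MB malloc bdev, a subsystem that accepts any host, a namespace with a fixed NGUID/EUI64, and an RDMA listener on the first target IP. rpc_cmd is the harness's wrapper around SPDK's scripts/rpc.py, so the same setup can be sketched as direct rpc.py calls, with every argument copied from the trace; a running nvmf_tgt and the default /var/tmp/spdk.sock RPC socket are assumed:

  # Assumes nvmf_tgt is already up and serving RPCs on /var/tmp/spdk.sock.
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t rdma -a 192.168.100.8 -s 4420
)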
11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:32.290 [2024-06-10 11:35:01.194878] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.290 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:32.290 [ 00:26:32.290 { 00:26:32.291 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:32.291 "subtype": "Discovery", 00:26:32.291 "listen_addresses": [ 00:26:32.291 { 00:26:32.291 "trtype": "RDMA", 00:26:32.291 "adrfam": "IPv4", 00:26:32.291 "traddr": "192.168.100.8", 00:26:32.291 "trsvcid": "4420" 00:26:32.291 } 00:26:32.291 ], 00:26:32.291 "allow_any_host": true, 00:26:32.291 "hosts": [] 00:26:32.291 }, 00:26:32.291 { 00:26:32.291 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:32.291 "subtype": "NVMe", 00:26:32.291 "listen_addresses": [ 00:26:32.291 { 00:26:32.291 "trtype": "RDMA", 00:26:32.291 "adrfam": "IPv4", 00:26:32.291 "traddr": "192.168.100.8", 00:26:32.291 "trsvcid": "4420" 00:26:32.291 } 00:26:32.291 ], 00:26:32.291 "allow_any_host": true, 00:26:32.291 "hosts": [], 00:26:32.291 "serial_number": "SPDK00000000000001", 00:26:32.291 "model_number": "SPDK bdev Controller", 00:26:32.291 "max_namespaces": 32, 00:26:32.291 "min_cntlid": 1, 00:26:32.291 "max_cntlid": 65519, 00:26:32.291 "namespaces": [ 00:26:32.291 { 00:26:32.291 "nsid": 1, 00:26:32.291 "bdev_name": "Malloc0", 00:26:32.291 "name": "Malloc0", 00:26:32.291 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:32.291 "eui64": "ABCDEF0123456789", 00:26:32.291 "uuid": "b5dfe723-7b57-4ebb-8bca-4adcd6aac3cb" 00:26:32.291 } 00:26:32.291 ] 00:26:32.291 } 00:26:32.291 ] 00:26:32.291 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.291 11:35:01 nvmf_rdma.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:32.291 [2024-06-10 11:35:01.255231] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
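(With both subsystems reported by nvmf_get_subsystems above, host/identify.sh@39 points spdk_nvme_identify at the discovery service. The -r string is an SPDK transport ID, carrying trtype/adrfam/traddr/trsvcid plus the subnqn to connect to, and -L all enables every debug log flag, which is what produces the nvme_rdma.c / nvme_ctrlr.c *DEBUG* chatter filling the rest of this log. The invocation from the trace, stated standalone:

  # Identify the discovery controller over RDMA; -L all turns on all debug logging.
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all
)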
00:26:32.291 [2024-06-10 11:35:01.255273] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724953 ] 00:26:32.554 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.554 [2024-06-10 11:35:01.311319] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:26:32.554 [2024-06-10 11:35:01.311401] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:26:32.554 [2024-06-10 11:35:01.311415] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:26:32.554 [2024-06-10 11:35:01.311419] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:26:32.554 [2024-06-10 11:35:01.311446] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:26:32.555 [2024-06-10 11:35:01.324697] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:26:32.555 [2024-06-10 11:35:01.342282] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:26:32.555 [2024-06-10 11:35:01.342291] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:26:32.555 [2024-06-10 11:35:01.342299] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342305] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342310] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342315] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342320] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342325] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342330] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342335] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342340] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342345] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342350] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342354] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342360] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342364] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342370] nvme_rdma.c: 
968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342374] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342380] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342385] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342389] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342398] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342403] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342408] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342413] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342418] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342423] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342428] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342433] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342438] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342443] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342448] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342453] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342457] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:26:32.555 [2024-06-10 11:35:01.342462] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:26:32.555 [2024-06-10 11:35:01.342465] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:26:32.555 [2024-06-10 11:35:01.342482] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.342493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x183800 00:26:32.555 [2024-06-10 11:35:01.348769] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.555 [2024-06-10 11:35:01.348777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:26:32.555 [2024-06-10 11:35:01.348784] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.348791] 
nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:32.555 [2024-06-10 11:35:01.348797] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:26:32.555 [2024-06-10 11:35:01.348802] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:26:32.555 [2024-06-10 11:35:01.348813] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.348821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.555 [2024-06-10 11:35:01.348848] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.555 [2024-06-10 11:35:01.348853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:26:32.555 [2024-06-10 11:35:01.348859] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:26:32.555 [2024-06-10 11:35:01.348863] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.348869] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:26:32.555 [2024-06-10 11:35:01.348877] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.348884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.555 [2024-06-10 11:35:01.348909] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.555 [2024-06-10 11:35:01.348914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:26:32.555 [2024-06-10 11:35:01.348919] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:26:32.555 [2024-06-10 11:35:01.348924] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.348930] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:26:32.555 [2024-06-10 11:35:01.348937] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.348943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.555 [2024-06-10 11:35:01.348963] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.555 [2024-06-10 11:35:01.348968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:32.555 [2024-06-10 11:35:01.348973] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:32.555 [2024-06-10 11:35:01.348978] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.348985] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.348992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.555 [2024-06-10 11:35:01.349014] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.555 [2024-06-10 11:35:01.349018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:32.555 [2024-06-10 11:35:01.349023] nvme_ctrlr.c:3804:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:26:32.555 [2024-06-10 11:35:01.349028] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:26:32.555 [2024-06-10 11:35:01.349033] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.349038] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:32.555 [2024-06-10 11:35:01.349143] nvme_ctrlr.c:3997:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:26:32.555 [2024-06-10 11:35:01.349148] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:32.555 [2024-06-10 11:35:01.349156] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.349163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.555 [2024-06-10 11:35:01.349186] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.555 [2024-06-10 11:35:01.349190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:32.555 [2024-06-10 11:35:01.349195] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:32.555 [2024-06-10 11:35:01.349204] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.349212] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.555 [2024-06-10 11:35:01.349218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.555 [2024-06-10 11:35:01.349242] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.555 [2024-06-10 11:35:01.349246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:26:32.556 [2024-06-10 11:35:01.349251] nvme_ctrlr.c:3839:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller 
is ready 00:26:32.556 [2024-06-10 11:35:01.349256] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:26:32.556 [2024-06-10 11:35:01.349261] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183800 00:26:32.556 [2024-06-10 11:35:01.349266] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:26:32.556 [2024-06-10 11:35:01.349273] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:26:32.556 [2024-06-10 11:35:01.349282] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.556 [2024-06-10 11:35:01.349289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183800 00:26:32.556 [2024-06-10 11:35:01.349331] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.556 [2024-06-10 11:35:01.349336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:32.556 [2024-06-10 11:35:01.349343] nvme_ctrlr.c:2039:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:26:32.556 [2024-06-10 11:35:01.349348] nvme_ctrlr.c:2043:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:26:32.556 [2024-06-10 11:35:01.349352] nvme_ctrlr.c:2046:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:26:32.556 [2024-06-10 11:35:01.349357] nvme_ctrlr.c:2070:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:26:32.556 [2024-06-10 11:35:01.349362] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:26:32.556 [2024-06-10 11:35:01.349367] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:26:32.556 [2024-06-10 11:35:01.349372] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183800 00:26:32.556 [2024-06-10 11:35:01.349380] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:26:32.556 [2024-06-10 11:35:01.349388] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.556 [2024-06-10 11:35:01.349395] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.556 [2024-06-10 11:35:01.349426] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.556 [2024-06-10 11:35:01.349431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:32.556 [2024-06-10 11:35:01.349439] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x183800 00:26:32.556 [2024-06-10 11:35:01.349447] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.556 [2024-06-10 11:35:01.349453] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x183800 00:26:32.556 [2024-06-10 11:35:01.349459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.556 [2024-06-10 11:35:01.349465] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.556 [2024-06-10 11:35:01.349471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.556 [2024-06-10 11:35:01.349477] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x183800 00:26:32.556 [2024-06-10 11:35:01.349483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.556 [2024-06-10 11:35:01.349487] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:26:32.556 [2024-06-10 11:35:01.349492] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183800 00:26:32.556 [2024-06-10 11:35:01.349501] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:32.556 [2024-06-10 11:35:01.349508] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.556 [2024-06-10 11:35:01.349515] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.556 [2024-06-10 11:35:01.349537] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.556 [2024-06-10 11:35:01.349541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:26:32.556 [2024-06-10 11:35:01.349546] nvme_ctrlr.c:2957:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:26:32.556 [2024-06-10 11:35:01.349551] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:26:32.556 [2024-06-10 11:35:01.349556] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183800 00:26:32.556 [2024-06-10 11:35:01.349564] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.556 [2024-06-10 11:35:01.349571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183800 00:26:32.556 [2024-06-10 11:35:01.349601] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.556 [2024-06-10 11:35:01.349605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:32.556 [2024-06-10 11:35:01.349611] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 
length 0x10 lkey 0x183800 00:26:32.556 [2024-06-10 11:35:01.349620] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:26:32.556 [2024-06-10 11:35:01.349639] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.556 [2024-06-10 11:35:01.349647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x183800 00:26:32.556 [2024-06-10 11:35:01.349654] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183800 00:26:32.556 [2024-06-10 11:35:01.349662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.556 [2024-06-10 11:35:01.349680] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.556 [2024-06-10 11:35:01.349685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:32.556 [2024-06-10 11:35:01.349695] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x183800 00:26:32.556 [2024-06-10 11:35:01.349702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x183800 00:26:32.556 [2024-06-10 11:35:01.349707] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183800 00:26:32.556 [2024-06-10 11:35:01.349712] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.556 [2024-06-10 11:35:01.349717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:32.556 [2024-06-10 11:35:01.349722] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183800 00:26:32.556 [2024-06-10 11:35:01.349741] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.556 [2024-06-10 11:35:01.349746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:32.556 [2024-06-10 11:35:01.349755] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183800 00:26:32.556 [2024-06-10 11:35:01.349761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x183800 00:26:32.556 [2024-06-10 11:35:01.349770] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183800 00:26:32.556 [2024-06-10 11:35:01.349790] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.556 [2024-06-10 11:35:01.349795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:32.556 [2024-06-10 11:35:01.349803] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183800 00:26:32.556 ===================================================== 00:26:32.556 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:32.556 
=====================================================
00:26:32.556 Controller Capabilities/Features
00:26:32.556 ================================
00:26:32.556 Vendor ID: 0000
00:26:32.556 Subsystem Vendor ID: 0000
00:26:32.556 Serial Number: ....................
00:26:32.556 Model Number: ........................................
00:26:32.556 Firmware Version: 24.09
00:26:32.556 Recommended Arb Burst: 0
00:26:32.556 IEEE OUI Identifier: 00 00 00
00:26:32.556 Multi-path I/O
00:26:32.556 May have multiple subsystem ports: No
00:26:32.556 May have multiple controllers: No
00:26:32.556 Associated with SR-IOV VF: No
00:26:32.556 Max Data Transfer Size: 131072
00:26:32.556 Max Number of Namespaces: 0
00:26:32.556 Max Number of I/O Queues: 1024
00:26:32.556 NVMe Specification Version (VS): 1.3
00:26:32.556 NVMe Specification Version (Identify): 1.3
00:26:32.556 Maximum Queue Entries: 128
00:26:32.556 Contiguous Queues Required: Yes
00:26:32.556 Arbitration Mechanisms Supported
00:26:32.556 Weighted Round Robin: Not Supported
00:26:32.556 Vendor Specific: Not Supported
00:26:32.556 Reset Timeout: 15000 ms
00:26:32.556 Doorbell Stride: 4 bytes
00:26:32.556 NVM Subsystem Reset: Not Supported
00:26:32.556 Command Sets Supported
00:26:32.556 NVM Command Set: Supported
00:26:32.556 Boot Partition: Not Supported
00:26:32.556 Memory Page Size Minimum: 4096 bytes
00:26:32.556 Memory Page Size Maximum: 4096 bytes
00:26:32.556 Persistent Memory Region: Not Supported
00:26:32.556 Optional Asynchronous Events Supported
00:26:32.557 Namespace Attribute Notices: Not Supported
00:26:32.557 Firmware Activation Notices: Not Supported
00:26:32.557 ANA Change Notices: Not Supported
00:26:32.557 PLE Aggregate Log Change Notices: Not Supported
00:26:32.557 LBA Status Info Alert Notices: Not Supported
00:26:32.557 EGE Aggregate Log Change Notices: Not Supported
00:26:32.557 Normal NVM Subsystem Shutdown event: Not Supported
00:26:32.557 Zone Descriptor Change Notices: Not Supported
00:26:32.557 Discovery Log Change Notices: Supported
00:26:32.557 Controller Attributes
00:26:32.557 128-bit Host Identifier: Not Supported
00:26:32.557 Non-Operational Permissive Mode: Not Supported
00:26:32.557 NVM Sets: Not Supported
00:26:32.557 Read Recovery Levels: Not Supported
00:26:32.557 Endurance Groups: Not Supported
00:26:32.557 Predictable Latency Mode: Not Supported
00:26:32.557 Traffic Based Keep ALive: Not Supported
00:26:32.557 Namespace Granularity: Not Supported
00:26:32.557 SQ Associations: Not Supported
00:26:32.557 UUID List: Not Supported
00:26:32.557 Multi-Domain Subsystem: Not Supported
00:26:32.557 Fixed Capacity Management: Not Supported
00:26:32.557 Variable Capacity Management: Not Supported
00:26:32.557 Delete Endurance Group: Not Supported
00:26:32.557 Delete NVM Set: Not Supported
00:26:32.557 Extended LBA Formats Supported: Not Supported
00:26:32.557 Flexible Data Placement Supported: Not Supported
00:26:32.557
00:26:32.557 Controller Memory Buffer Support
00:26:32.557 ================================
00:26:32.557 Supported: No
00:26:32.557
00:26:32.557 Persistent Memory Region Support
00:26:32.557 ================================
00:26:32.557 Supported: No
00:26:32.557
00:26:32.557 Admin Command Set Attributes
00:26:32.557 ============================
00:26:32.557 Security Send/Receive: Not Supported
00:26:32.557 Format NVM: Not Supported
00:26:32.557 Firmware Activate/Download: Not Supported
00:26:32.557 Namespace Management: Not Supported
00:26:32.557 Device Self-Test: Not Supported
00:26:32.557 Directives: Not Supported
00:26:32.557 NVMe-MI: Not Supported
00:26:32.557 Virtualization Management: Not Supported
00:26:32.557 Doorbell Buffer Config: Not Supported
00:26:32.557 Get LBA Status Capability: Not Supported
00:26:32.557 Command & Feature Lockdown Capability: Not Supported
00:26:32.557 Abort Command Limit: 1
00:26:32.557 Async Event Request Limit: 4
00:26:32.557 Number of Firmware Slots: N/A
00:26:32.557 Firmware Slot 1 Read-Only: N/A
00:26:32.557 Firmware Activation Without Reset: N/A
00:26:32.557 Multiple Update Detection Support: N/A
00:26:32.557 Firmware Update Granularity: No Information Provided
00:26:32.557 Per-Namespace SMART Log: No
00:26:32.557 Asymmetric Namespace Access Log Page: Not Supported
00:26:32.557 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:26:32.557 Command Effects Log Page: Not Supported
00:26:32.557 Get Log Page Extended Data: Supported
00:26:32.557 Telemetry Log Pages: Not Supported
00:26:32.557 Persistent Event Log Pages: Not Supported
00:26:32.557 Supported Log Pages Log Page: May Support
00:26:32.557 Commands Supported & Effects Log Page: Not Supported
00:26:32.557 Feature Identifiers & Effects Log Page:May Support
00:26:32.557 NVMe-MI Commands & Effects Log Page: May Support
00:26:32.557 Data Area 4 for Telemetry Log: Not Supported
00:26:32.557 Error Log Page Entries Supported: 128
00:26:32.557 Keep Alive: Not Supported
00:26:32.557
00:26:32.557 NVM Command Set Attributes
00:26:32.557 ==========================
00:26:32.557 Submission Queue Entry Size
00:26:32.557 Max: 1
00:26:32.557 Min: 1
00:26:32.557 Completion Queue Entry Size
00:26:32.557 Max: 1
00:26:32.557 Min: 1
00:26:32.557 Number of Namespaces: 0
00:26:32.557 Compare Command: Not Supported
00:26:32.557 Write Uncorrectable Command: Not Supported
00:26:32.557 Dataset Management Command: Not Supported
00:26:32.557 Write Zeroes Command: Not Supported
00:26:32.557 Set Features Save Field: Not Supported
00:26:32.557 Reservations: Not Supported
00:26:32.557 Timestamp: Not Supported
00:26:32.557 Copy: Not Supported
00:26:32.557 Volatile Write Cache: Not Present
00:26:32.557 Atomic Write Unit (Normal): 1
00:26:32.557 Atomic Write Unit (PFail): 1
00:26:32.557 Atomic Compare & Write Unit: 1
00:26:32.557 Fused Compare & Write: Supported
00:26:32.557 Scatter-Gather List
00:26:32.557 SGL Command Set: Supported
00:26:32.557 SGL Keyed: Supported
00:26:32.557 SGL Bit Bucket Descriptor: Not Supported
00:26:32.557 SGL Metadata Pointer: Not Supported
00:26:32.557 Oversized SGL: Not Supported
00:26:32.557 SGL Metadata Address: Not Supported
00:26:32.557 SGL Offset: Supported
00:26:32.557 Transport SGL Data Block: Not Supported
00:26:32.557 Replay Protected Memory Block: Not Supported
00:26:32.557
00:26:32.557 Firmware Slot Information
00:26:32.557 =========================
00:26:32.557 Active slot: 0
00:26:32.557
00:26:32.557
00:26:32.557 Error Log
00:26:32.557 =========
00:26:32.557
00:26:32.557 Active Namespaces
00:26:32.557 =================
00:26:32.557 Discovery Log Page
00:26:32.557 ==================
00:26:32.557 Generation Counter: 2
00:26:32.557 Number of Records: 2
00:26:32.557 Record Format: 0
00:26:32.557
00:26:32.557 Discovery Log Entry 0
00:26:32.557 ----------------------
00:26:32.557 Transport Type: 1 (RDMA)
00:26:32.557 Address Family: 1 (IPv4)
00:26:32.557 Subsystem Type: 3 (Current Discovery Subsystem)
00:26:32.557 Entry Flags:
00:26:32.557 Duplicate Returned Information: 1
00:26:32.557 Explicit Persistent Connection Support for Discovery: 1
00:26:32.557 Transport Requirements:
00:26:32.557 Secure Channel: Not Required
00:26:32.557 Port ID: 0 (0x0000)
00:26:32.557 Controller ID: 65535 (0xffff)
00:26:32.557 Admin Max SQ Size: 128
00:26:32.557 Transport Service Identifier: 4420
00:26:32.557 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:26:32.557 Transport Address: 192.168.100.8
00:26:32.557 Transport Specific Address Subtype - RDMA
00:26:32.557 RDMA QP Service Type: 1 (Reliable Connected)
00:26:32.557 RDMA Provider Type: 1 (No provider specified)
00:26:32.557 RDMA CM Service: 1 (RDMA_CM)
00:26:32.557 Discovery Log Entry 1
00:26:32.557 ----------------------
00:26:32.557 Transport Type: 1 (RDMA)
00:26:32.557 Address Family: 1 (IPv4)
00:26:32.557 Subsystem Type: 2 (NVM Subsystem)
00:26:32.557 Entry Flags:
00:26:32.557 Duplicate Returned Information: 0
00:26:32.557 Explicit Persistent Connection Support for Discovery: 0
00:26:32.557 Transport Requirements:
00:26:32.557 Secure Channel: Not Required
00:26:32.557 Port ID: 0 (0x0000)
00:26:32.557 Controller ID: 65535 (0xffff)
00:26:32.557 Admin Max SQ Size: [2024-06-10 11:35:01.349877] nvme_ctrlr.c:4276:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:26:32.557 [2024-06-10 11:35:01.349886] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 60418 doesn't match qid
00:26:32.557 [2024-06-10 11:35:01.349899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32696 cdw0:5 sqhd:b530 p:0 m:0 dnr:0
00:26:32.557 [2024-06-10 11:35:01.349905] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 60418 doesn't match qid
00:26:32.557 [2024-06-10 11:35:01.349911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32696 cdw0:5 sqhd:b530 p:0 m:0 dnr:0
00:26:32.557 [2024-06-10 11:35:01.349916] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 60418 doesn't match qid
00:26:32.557 [2024-06-10 11:35:01.349922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32696 cdw0:5 sqhd:b530 p:0 m:0 dnr:0
00:26:32.557 [2024-06-10 11:35:01.349927] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 60418 doesn't match qid
00:26:32.557 [2024-06-10 11:35:01.349933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32696 cdw0:5 sqhd:b530 p:0 m:0 dnr:0
00:26:32.557 [2024-06-10 11:35:01.349941] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x183800
00:26:32.557 [2024-06-10 11:35:01.349948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:26:32.557 [2024-06-10 11:35:01.349970] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:26:32.557 [2024-06-10 11:35:01.349976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0
00:26:32.557 [2024-06-10 11:35:01.349983] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800
00:26:32.558 [2024-06-10 11:35:01.349990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:26:32.558 [2024-06-10 11:35:01.349995] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183800
00:26:32.558 [2024-06-10
11:35:01.350016] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.558 [2024-06-10 11:35:01.350021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:32.558 [2024-06-10 11:35:01.350026] nvme_ctrlr.c:1137:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:26:32.558 [2024-06-10 11:35:01.350031] nvme_ctrlr.c:1140:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:26:32.558 [2024-06-10 11:35:01.350036] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350043] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.558 [2024-06-10 11:35:01.350073] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.558 [2024-06-10 11:35:01.350077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:26:32.558 [2024-06-10 11:35:01.350083] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350091] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.558 [2024-06-10 11:35:01.350116] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.558 [2024-06-10 11:35:01.350121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:26:32.558 [2024-06-10 11:35:01.350126] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350135] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.558 [2024-06-10 11:35:01.350163] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.558 [2024-06-10 11:35:01.350168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:26:32.558 [2024-06-10 11:35:01.350173] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350181] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.558 [2024-06-10 11:35:01.350215] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.558 [2024-06-10 11:35:01.350219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:26:32.558 [2024-06-10 11:35:01.350225] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350233] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.558 [2024-06-10 11:35:01.350264] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.558 [2024-06-10 11:35:01.350269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:26:32.558 [2024-06-10 11:35:01.350274] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350283] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.558 [2024-06-10 11:35:01.350308] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.558 [2024-06-10 11:35:01.350313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:26:32.558 [2024-06-10 11:35:01.350318] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350327] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.558 [2024-06-10 11:35:01.350357] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.558 [2024-06-10 11:35:01.350362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:26:32.558 [2024-06-10 11:35:01.350367] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350376] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.558 [2024-06-10 11:35:01.350401] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.558 [2024-06-10 11:35:01.350405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:26:32.558 [2024-06-10 11:35:01.350410] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350419] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:26:32.558 [2024-06-10 11:35:01.350444] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.558 [2024-06-10 11:35:01.350449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:26:32.558 [2024-06-10 11:35:01.350454] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350462] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.558 [2024-06-10 11:35:01.350493] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.558 [2024-06-10 11:35:01.350498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:26:32.558 [2024-06-10 11:35:01.350504] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350514] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.558 [2024-06-10 11:35:01.350543] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.558 [2024-06-10 11:35:01.350547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:26:32.558 [2024-06-10 11:35:01.350553] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350561] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.558 [2024-06-10 11:35:01.350586] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.558 [2024-06-10 11:35:01.350590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:26:32.558 [2024-06-10 11:35:01.350596] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350604] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.558 [2024-06-10 11:35:01.350631] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.558 [2024-06-10 11:35:01.350636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:26:32.558 [2024-06-10 11:35:01.350641] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350649] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.558 [2024-06-10 11:35:01.350674] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.558 [2024-06-10 11:35:01.350679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:26:32.558 [2024-06-10 11:35:01.350684] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350692] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.558 [2024-06-10 11:35:01.350725] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.558 [2024-06-10 11:35:01.350730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:26:32.558 [2024-06-10 11:35:01.350735] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350743] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.558 [2024-06-10 11:35:01.350750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.559 [2024-06-10 11:35:01.350773] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.559 [2024-06-10 11:35:01.350778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:26:32.559 [2024-06-10 11:35:01.350783] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.350793] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.350800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.559 [2024-06-10 11:35:01.350820] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.559 [2024-06-10 11:35:01.350825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:26:32.559 [2024-06-10 11:35:01.350830] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.350838] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.350845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.559 [2024-06-10 11:35:01.350869] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.559 [2024-06-10 11:35:01.350873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:26:32.559 [2024-06-10 11:35:01.350879] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.350887] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.350894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.559 [2024-06-10 11:35:01.350914] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.559 [2024-06-10 11:35:01.350918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:26:32.559 [2024-06-10 11:35:01.350924] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.350932] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.350938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.559 [2024-06-10 11:35:01.350957] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.559 [2024-06-10 11:35:01.350961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:26:32.559 [2024-06-10 11:35:01.350967] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.350975] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.350981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.559 [2024-06-10 11:35:01.351007] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.559 [2024-06-10 11:35:01.351012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:26:32.559 [2024-06-10 11:35:01.351017] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.351026] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.351032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.559 [2024-06-10 11:35:01.351052] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.559 [2024-06-10 11:35:01.351057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:26:32.559 [2024-06-10 11:35:01.351064] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.351072] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.351078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:26:32.559 [2024-06-10 11:35:01.351101] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.559 [2024-06-10 11:35:01.351105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:26:32.559 [2024-06-10 11:35:01.351110] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.351119] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.351125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.559 [2024-06-10 11:35:01.351147] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.559 [2024-06-10 11:35:01.351152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:26:32.559 [2024-06-10 11:35:01.351157] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.351165] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.351172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.559 [2024-06-10 11:35:01.351198] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.559 [2024-06-10 11:35:01.351203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:26:32.559 [2024-06-10 11:35:01.351208] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.351216] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.351223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.559 [2024-06-10 11:35:01.351245] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.559 [2024-06-10 11:35:01.351249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:26:32.559 [2024-06-10 11:35:01.351255] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.351263] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.351269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.559 [2024-06-10 11:35:01.351290] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.559 [2024-06-10 11:35:01.351294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:26:32.559 [2024-06-10 11:35:01.351299] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.351308] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.351314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.559 [2024-06-10 11:35:01.351335] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.559 [2024-06-10 11:35:01.351339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:26:32.559 [2024-06-10 11:35:01.351346] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.351354] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.351361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.559 [2024-06-10 11:35:01.351389] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.559 [2024-06-10 11:35:01.351393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:26:32.559 [2024-06-10 11:35:01.351398] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.351407] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.559 [2024-06-10 11:35:01.351413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.559 [2024-06-10 11:35:01.351432] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.560 [2024-06-10 11:35:01.351436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:26:32.560 [2024-06-10 11:35:01.351441] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.351450] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.351456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.560 [2024-06-10 11:35:01.351484] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.560 [2024-06-10 11:35:01.351489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:26:32.560 [2024-06-10 11:35:01.351494] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.351502] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.351509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.560 [2024-06-10 11:35:01.351529] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.560 [2024-06-10 11:35:01.351533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:26:32.560 [2024-06-10 11:35:01.351539] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.351547] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.351553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.560 [2024-06-10 11:35:01.351574] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.560 [2024-06-10 11:35:01.351578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:26:32.560 [2024-06-10 11:35:01.351583] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.351592] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.351598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.560 [2024-06-10 11:35:01.351626] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.560 [2024-06-10 11:35:01.351632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:26:32.560 [2024-06-10 11:35:01.351637] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.351646] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.351652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.560 [2024-06-10 11:35:01.351670] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.560 [2024-06-10 11:35:01.351675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:26:32.560 [2024-06-10 11:35:01.351680] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.351689] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.351695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.560 [2024-06-10 11:35:01.351717] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.560 [2024-06-10 11:35:01.351722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:26:32.560 [2024-06-10 11:35:01.351727] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.351735] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.351742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:26:32.560 [2024-06-10 11:35:01.351767] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.560 [2024-06-10 11:35:01.351771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:26:32.560 [2024-06-10 11:35:01.351777] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.351785] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.351792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.560 [2024-06-10 11:35:01.351816] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.560 [2024-06-10 11:35:01.351820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:26:32.560 [2024-06-10 11:35:01.351825] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.351834] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.351840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.560 [2024-06-10 11:35:01.351859] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.560 [2024-06-10 11:35:01.351863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:26:32.560 [2024-06-10 11:35:01.351868] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.351877] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.351883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.560 [2024-06-10 11:35:01.351902] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.560 [2024-06-10 11:35:01.351908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:26:32.560 [2024-06-10 11:35:01.351913] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.351921] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.351928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.560 [2024-06-10 11:35:01.351948] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.560 [2024-06-10 11:35:01.351953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:26:32.560 [2024-06-10 11:35:01.351958] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.351966] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.351973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.560 [2024-06-10 11:35:01.352001] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.560 [2024-06-10 11:35:01.352005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:26:32.560 [2024-06-10 11:35:01.352011] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.352019] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.352025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.560 [2024-06-10 11:35:01.352053] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.560 [2024-06-10 11:35:01.352058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:26:32.560 [2024-06-10 11:35:01.352063] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.352071] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.352078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.560 [2024-06-10 11:35:01.352102] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.560 [2024-06-10 11:35:01.352107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:26:32.560 [2024-06-10 11:35:01.352112] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.352120] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.352127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.560 [2024-06-10 11:35:01.352151] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.560 [2024-06-10 11:35:01.352155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:26:32.560 [2024-06-10 11:35:01.352160] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.352169] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.352175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.560 [2024-06-10 11:35:01.352195] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.560 [2024-06-10 11:35:01.352200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:26:32.560 [2024-06-10 11:35:01.352205] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.352213] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.560 [2024-06-10 11:35:01.352220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.561 [2024-06-10 11:35:01.352242] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.561 [2024-06-10 11:35:01.352246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:26:32.561 [2024-06-10 11:35:01.352252] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.352260] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.352267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.561 [2024-06-10 11:35:01.352289] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.561 [2024-06-10 11:35:01.352293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:26:32.561 [2024-06-10 11:35:01.352299] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.352307] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.352314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.561 [2024-06-10 11:35:01.352336] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.561 [2024-06-10 11:35:01.352341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:26:32.561 [2024-06-10 11:35:01.352346] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.352354] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.352361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.561 [2024-06-10 11:35:01.352379] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.561 [2024-06-10 11:35:01.352384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:26:32.561 [2024-06-10 11:35:01.352389] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.352397] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.352404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:26:32.561 [2024-06-10 11:35:01.352428] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.561 [2024-06-10 11:35:01.352432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:26:32.561 [2024-06-10 11:35:01.352438] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.352446] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.352452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.561 [2024-06-10 11:35:01.352470] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.561 [2024-06-10 11:35:01.352475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:26:32.561 [2024-06-10 11:35:01.352480] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.352488] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.352495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.561 [2024-06-10 11:35:01.352515] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.561 [2024-06-10 11:35:01.352520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:26:32.561 [2024-06-10 11:35:01.352525] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.352534] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.352540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.561 [2024-06-10 11:35:01.352564] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.561 [2024-06-10 11:35:01.352569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:26:32.561 [2024-06-10 11:35:01.352574] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.352582] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.352589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.561 [2024-06-10 11:35:01.352609] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.561 [2024-06-10 11:35:01.352614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:26:32.561 [2024-06-10 11:35:01.352619] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.352627] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.352634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.561 [2024-06-10 11:35:01.352658] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.561 [2024-06-10 11:35:01.352662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:26:32.561 [2024-06-10 11:35:01.352667] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.352676] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.352682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.561 [2024-06-10 11:35:01.352706] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.561 [2024-06-10 11:35:01.352711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:26:32.561 [2024-06-10 11:35:01.352716] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.352724] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.352732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.561 [2024-06-10 11:35:01.352753] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.561 [2024-06-10 11:35:01.352757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:26:32.561 [2024-06-10 11:35:01.356768] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.356779] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.356786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.561 [2024-06-10 11:35:01.356806] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.561 [2024-06-10 11:35:01.356811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000c p:0 m:0 dnr:0 00:26:32.561 [2024-06-10 11:35:01.356816] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183800 00:26:32.561 [2024-06-10 11:35:01.356822] nvme_ctrlr.c:1259:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:26:32.561 128 00:26:32.561 Transport Service Identifier: 4420 00:26:32.561 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:26:32.561 Transport Address: 192.168.100.8 00:26:32.561 Transport Specific Address Subtype - RDMA 00:26:32.561 RDMA QP Service Type: 1 (Reliable Connected) 00:26:32.561 RDMA Provider Type: 1 (No provider specified) 00:26:32.561 RDMA CM Service: 1 
(RDMA_CM) 00:26:32.561 11:35:01 nvmf_rdma.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:26:32.561 [2024-06-10 11:35:01.439178] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:26:32.561 [2024-06-10 11:35:01.439256] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3725046 ] 00:26:32.561 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.561 [2024-06-10 11:35:01.498178] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:26:32.561 [2024-06-10 11:35:01.498260] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:26:32.561 [2024-06-10 11:35:01.498275] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:26:32.561 [2024-06-10 11:35:01.498279] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:26:32.561 [2024-06-10 11:35:01.498303] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:26:32.561 [2024-06-10 11:35:01.508708] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:26:32.826 [2024-06-10 11:35:01.529973] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:26:32.826 [2024-06-10 11:35:01.529983] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:26:32.826 [2024-06-10 11:35:01.529990] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.529996] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530001] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530010] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530015] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530020] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530025] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530030] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530035] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530040] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530045] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530050] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183800 
00:26:32.826 [2024-06-10 11:35:01.530055] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530059] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530064] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530069] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530074] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530079] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530084] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530089] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530094] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530099] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530104] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530109] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530114] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530119] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530124] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530129] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530134] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530139] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530144] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530148] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:26:32.826 [2024-06-10 11:35:01.530153] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:26:32.826 [2024-06-10 11:35:01.530156] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:26:32.826 [2024-06-10 11:35:01.530171] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.826 [2024-06-10 11:35:01.530183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x183800 00:26:32.827 [2024-06-10 11:35:01.536768] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.827 
[2024-06-10 11:35:01.536777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:26:32.827 [2024-06-10 11:35:01.536783] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183800 00:26:32.827 [2024-06-10 11:35:01.536790] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:32.827 [2024-06-10 11:35:01.536796] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:26:32.827 [2024-06-10 11:35:01.536801] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:26:32.827 [2024-06-10 11:35:01.536811] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.827 [2024-06-10 11:35:01.536819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.827 [2024-06-10 11:35:01.536840] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.827 [2024-06-10 11:35:01.536845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:26:32.827 [2024-06-10 11:35:01.536850] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:26:32.827 [2024-06-10 11:35:01.536855] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183800 00:26:32.827 [2024-06-10 11:35:01.536861] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:26:32.827 [2024-06-10 11:35:01.536868] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.827 [2024-06-10 11:35:01.536875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.827 [2024-06-10 11:35:01.536887] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.827 [2024-06-10 11:35:01.536893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:26:32.827 [2024-06-10 11:35:01.536898] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:26:32.827 [2024-06-10 11:35:01.536903] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183800 00:26:32.827 [2024-06-10 11:35:01.536910] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:26:32.827 [2024-06-10 11:35:01.536916] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.827 [2024-06-10 11:35:01.536923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.827 [2024-06-10 11:35:01.536938] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.827 [2024-06-10 11:35:01.536942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:26:32.827 [2024-06-10 11:35:01.536948] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:32.827 [2024-06-10 11:35:01.536953] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183800 00:26:32.827 [2024-06-10 11:35:01.536960] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.827 [2024-06-10 11:35:01.536969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.827 [2024-06-10 11:35:01.536982] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.827 [2024-06-10 11:35:01.536987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:32.827 [2024-06-10 11:35:01.536992] nvme_ctrlr.c:3804:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:26:32.827 [2024-06-10 11:35:01.536996] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:26:32.827 [2024-06-10 11:35:01.537001] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183800 00:26:32.827 [2024-06-10 11:35:01.537007] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:32.827 [2024-06-10 11:35:01.537112] nvme_ctrlr.c:3997:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:26:32.827 [2024-06-10 11:35:01.537116] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:32.827 [2024-06-10 11:35:01.537124] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.827 [2024-06-10 11:35:01.537131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.827 [2024-06-10 11:35:01.537146] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.827 [2024-06-10 11:35:01.537150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:32.827 [2024-06-10 11:35:01.537155] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:32.827 [2024-06-10 11:35:01.537160] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183800 00:26:32.827 [2024-06-10 11:35:01.537168] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.827 [2024-06-10 11:35:01.537175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.827 [2024-06-10 11:35:01.537192] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.827 [2024-06-10 11:35:01.537196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:26:32.827 [2024-06-10 
11:35:01.537201] nvme_ctrlr.c:3839:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:32.827 [2024-06-10 11:35:01.537206] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:26:32.827 [2024-06-10 11:35:01.537210] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183800 00:26:32.827 [2024-06-10 11:35:01.537216] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:26:32.827 [2024-06-10 11:35:01.537229] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:26:32.827 [2024-06-10 11:35:01.537237] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.827 [2024-06-10 11:35:01.537244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183800 00:26:32.827 [2024-06-10 11:35:01.537275] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.827 [2024-06-10 11:35:01.537280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:32.827 [2024-06-10 11:35:01.537288] nvme_ctrlr.c:2039:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:26:32.827 [2024-06-10 11:35:01.537293] nvme_ctrlr.c:2043:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:26:32.827 [2024-06-10 11:35:01.537297] nvme_ctrlr.c:2046:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:26:32.827 [2024-06-10 11:35:01.537302] nvme_ctrlr.c:2070:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:26:32.827 [2024-06-10 11:35:01.537306] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:26:32.827 [2024-06-10 11:35:01.537311] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:26:32.827 [2024-06-10 11:35:01.537316] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183800 00:26:32.827 [2024-06-10 11:35:01.537324] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:26:32.827 [2024-06-10 11:35:01.537332] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.827 [2024-06-10 11:35:01.537339] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.827 [2024-06-10 11:35:01.537354] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.827 [2024-06-10 11:35:01.537359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:32.827 [2024-06-10 11:35:01.537366] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x183800 
00:26:32.827 [2024-06-10 11:35:01.537372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.827 [2024-06-10 11:35:01.537379] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x183800 00:26:32.827 [2024-06-10 11:35:01.537384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.827 [2024-06-10 11:35:01.537390] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.827 [2024-06-10 11:35:01.537396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.827 [2024-06-10 11:35:01.537402] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x183800 00:26:32.827 [2024-06-10 11:35:01.537408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.827 [2024-06-10 11:35:01.537412] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:32.827 [2024-06-10 11:35:01.537417] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183800 00:26:32.827 [2024-06-10 11:35:01.537426] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:32.827 [2024-06-10 11:35:01.537433] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.827 [2024-06-10 11:35:01.537440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.827 [2024-06-10 11:35:01.537452] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.827 [2024-06-10 11:35:01.537457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:26:32.827 [2024-06-10 11:35:01.537464] nvme_ctrlr.c:2957:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:26:32.828 [2024-06-10 11:35:01.537469] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:32.828 [2024-06-10 11:35:01.537473] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.537480] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:26:32.828 [2024-06-10 11:35:01.537487] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:32.828 [2024-06-10 11:35:01.537494] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.537500] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.828 
[2024-06-10 11:35:01.537523] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.828 [2024-06-10 11:35:01.537528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:26:32.828 [2024-06-10 11:35:01.537582] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:26:32.828 [2024-06-10 11:35:01.537587] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.537594] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:32.828 [2024-06-10 11:35:01.537602] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.537609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x183800 00:26:32.828 [2024-06-10 11:35:01.537628] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.828 [2024-06-10 11:35:01.537632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:32.828 [2024-06-10 11:35:01.537646] nvme_ctrlr.c:4612:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:26:32.828 [2024-06-10 11:35:01.537654] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:26:32.828 [2024-06-10 11:35:01.537659] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.537666] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:26:32.828 [2024-06-10 11:35:01.537674] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.537680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183800 00:26:32.828 [2024-06-10 11:35:01.537703] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.828 [2024-06-10 11:35:01.537708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:32.828 [2024-06-10 11:35:01.537716] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:32.828 [2024-06-10 11:35:01.537721] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.537728] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:32.828 [2024-06-10 11:35:01.537738] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.537744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 
cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183800 00:26:32.828 [2024-06-10 11:35:01.537771] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.828 [2024-06-10 11:35:01.537776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:32.828 [2024-06-10 11:35:01.537784] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:32.828 [2024-06-10 11:35:01.537789] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.537795] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:26:32.828 [2024-06-10 11:35:01.537803] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:26:32.828 [2024-06-10 11:35:01.537808] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:32.828 [2024-06-10 11:35:01.537813] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:26:32.828 [2024-06-10 11:35:01.537818] nvme_ctrlr.c:3045:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:26:32.828 [2024-06-10 11:35:01.537823] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:26:32.828 [2024-06-10 11:35:01.537828] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:26:32.828 [2024-06-10 11:35:01.537841] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.537848] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.828 [2024-06-10 11:35:01.537855] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.537861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.828 [2024-06-10 11:35:01.537871] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.828 [2024-06-10 11:35:01.537876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:32.828 [2024-06-10 11:35:01.537882] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.537887] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.828 [2024-06-10 11:35:01.537891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:32.828 [2024-06-10 11:35:01.537896] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.537904] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d09c0 length 0x40 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.537910] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.828 [2024-06-10 11:35:01.537924] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.828 [2024-06-10 11:35:01.537929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:32.828 [2024-06-10 11:35:01.537935] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.537943] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.537949] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.828 [2024-06-10 11:35:01.537968] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.828 [2024-06-10 11:35:01.537972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:32.828 [2024-06-10 11:35:01.537978] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.537985] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.537992] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.828 [2024-06-10 11:35:01.538008] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.828 [2024-06-10 11:35:01.538012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:26:32.828 [2024-06-10 11:35:01.538018] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.538029] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.538036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x183800 00:26:32.828 [2024-06-10 11:35:01.538044] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.538051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x183800 00:26:32.828 [2024-06-10 11:35:01.538059] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.538065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x183800 00:26:32.828 [2024-06-10 11:35:01.538073] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d0c40 length 0x40 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.538079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x183800 00:26:32.828 [2024-06-10 11:35:01.538087] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.828 [2024-06-10 11:35:01.538091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:32.828 [2024-06-10 11:35:01.538101] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.538106] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.828 [2024-06-10 11:35:01.538111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:32.828 [2024-06-10 11:35:01.538120] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.538125] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.828 [2024-06-10 11:35:01.538130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:32.828 [2024-06-10 11:35:01.538136] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183800 00:26:32.828 [2024-06-10 11:35:01.538142] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.828 [2024-06-10 11:35:01.538146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:32.829 [2024-06-10 11:35:01.538154] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183800 00:26:32.829 ===================================================== 00:26:32.829 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:32.829 ===================================================== 00:26:32.829 Controller Capabilities/Features 00:26:32.829 ================================ 00:26:32.829 Vendor ID: 8086 00:26:32.829 Subsystem Vendor ID: 8086 00:26:32.829 Serial Number: SPDK00000000000001 00:26:32.829 Model Number: SPDK bdev Controller 00:26:32.829 Firmware Version: 24.09 00:26:32.829 Recommended Arb Burst: 6 00:26:32.829 IEEE OUI Identifier: e4 d2 5c 00:26:32.829 Multi-path I/O 00:26:32.829 May have multiple subsystem ports: Yes 00:26:32.829 May have multiple controllers: Yes 00:26:32.829 Associated with SR-IOV VF: No 00:26:32.829 Max Data Transfer Size: 131072 00:26:32.829 Max Number of Namespaces: 32 00:26:32.829 Max Number of I/O Queues: 127 00:26:32.829 NVMe Specification Version (VS): 1.3 00:26:32.829 NVMe Specification Version (Identify): 1.3 00:26:32.829 Maximum Queue Entries: 128 00:26:32.829 Contiguous Queues Required: Yes 00:26:32.829 Arbitration Mechanisms Supported 00:26:32.829 Weighted Round Robin: Not Supported 00:26:32.829 Vendor Specific: Not Supported 00:26:32.829 Reset Timeout: 15000 ms 00:26:32.829 Doorbell Stride: 4 bytes 00:26:32.829 NVM Subsystem Reset: Not Supported 00:26:32.829 Command Sets Supported 00:26:32.829 NVM Command Set: Supported 00:26:32.829 Boot Partition: Not Supported 00:26:32.829 Memory Page Size Minimum: 4096 bytes 00:26:32.829 Memory Page Size Maximum: 4096 bytes 
00:26:32.829 Persistent Memory Region: Not Supported
00:26:32.829 Optional Asynchronous Events Supported
00:26:32.829 Namespace Attribute Notices: Supported
00:26:32.829 Firmware Activation Notices: Not Supported
00:26:32.829 ANA Change Notices: Not Supported
00:26:32.829 PLE Aggregate Log Change Notices: Not Supported
00:26:32.829 LBA Status Info Alert Notices: Not Supported
00:26:32.829 EGE Aggregate Log Change Notices: Not Supported
00:26:32.829 Normal NVM Subsystem Shutdown event: Not Supported
00:26:32.829 Zone Descriptor Change Notices: Not Supported
00:26:32.829 Discovery Log Change Notices: Not Supported
00:26:32.829 Controller Attributes
00:26:32.829 128-bit Host Identifier: Supported
00:26:32.829 Non-Operational Permissive Mode: Not Supported
00:26:32.829 NVM Sets: Not Supported
00:26:32.829 Read Recovery Levels: Not Supported
00:26:32.829 Endurance Groups: Not Supported
00:26:32.829 Predictable Latency Mode: Not Supported
00:26:32.829 Traffic Based Keep ALive: Not Supported
00:26:32.829 Namespace Granularity: Not Supported
00:26:32.829 SQ Associations: Not Supported
00:26:32.829 UUID List: Not Supported
00:26:32.829 Multi-Domain Subsystem: Not Supported
00:26:32.829 Fixed Capacity Management: Not Supported
00:26:32.829 Variable Capacity Management: Not Supported
00:26:32.829 Delete Endurance Group: Not Supported
00:26:32.829 Delete NVM Set: Not Supported
00:26:32.829 Extended LBA Formats Supported: Not Supported
00:26:32.829 Flexible Data Placement Supported: Not Supported
00:26:32.829 
00:26:32.829 Controller Memory Buffer Support
00:26:32.829 ================================
00:26:32.829 Supported: No
00:26:32.829 
00:26:32.829 Persistent Memory Region Support
00:26:32.829 ================================
00:26:32.829 Supported: No
00:26:32.829 
00:26:32.829 Admin Command Set Attributes
00:26:32.829 ============================
00:26:32.829 Security Send/Receive: Not Supported
00:26:32.829 Format NVM: Not Supported
00:26:32.829 Firmware Activate/Download: Not Supported
00:26:32.829 Namespace Management: Not Supported
00:26:32.829 Device Self-Test: Not Supported
00:26:32.829 Directives: Not Supported
00:26:32.829 NVMe-MI: Not Supported
00:26:32.829 Virtualization Management: Not Supported
00:26:32.829 Doorbell Buffer Config: Not Supported
00:26:32.829 Get LBA Status Capability: Not Supported
00:26:32.829 Command & Feature Lockdown Capability: Not Supported
00:26:32.829 Abort Command Limit: 4
00:26:32.829 Async Event Request Limit: 4
00:26:32.829 Number of Firmware Slots: N/A
00:26:32.829 Firmware Slot 1 Read-Only: N/A
00:26:32.829 Firmware Activation Without Reset: N/A
00:26:32.829 Multiple Update Detection Support: N/A
00:26:32.829 Firmware Update Granularity: No Information Provided
00:26:32.829 Per-Namespace SMART Log: No
00:26:32.829 Asymmetric Namespace Access Log Page: Not Supported
00:26:32.829 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:26:32.829 Command Effects Log Page: Supported
00:26:32.829 Get Log Page Extended Data: Supported
00:26:32.829 Telemetry Log Pages: Not Supported
00:26:32.829 Persistent Event Log Pages: Not Supported
00:26:32.829 Supported Log Pages Log Page: May Support
00:26:32.829 Commands Supported & Effects Log Page: Not Supported
00:26:32.829 Feature Identifiers & Effects Log Page:May Support
00:26:32.829 NVMe-MI Commands & Effects Log Page: May Support
00:26:32.829 Data Area 4 for Telemetry Log: Not Supported
00:26:32.829 Error Log Page Entries Supported: 128
00:26:32.829 Keep Alive: Supported
00:26:32.829 Keep Alive Granularity: 10000 ms
00:26:32.829 
00:26:32.829 NVM Command Set Attributes
00:26:32.829 ==========================
00:26:32.829 Submission Queue Entry Size
00:26:32.829 Max: 64
00:26:32.829 Min: 64
00:26:32.829 Completion Queue Entry Size
00:26:32.829 Max: 16
00:26:32.829 Min: 16
00:26:32.829 Number of Namespaces: 32
00:26:32.829 Compare Command: Supported
00:26:32.829 Write Uncorrectable Command: Not Supported
00:26:32.829 Dataset Management Command: Supported
00:26:32.829 Write Zeroes Command: Supported
00:26:32.829 Set Features Save Field: Not Supported
00:26:32.829 Reservations: Supported
00:26:32.829 Timestamp: Not Supported
00:26:32.829 Copy: Supported
00:26:32.829 Volatile Write Cache: Present
00:26:32.829 Atomic Write Unit (Normal): 1
00:26:32.829 Atomic Write Unit (PFail): 1
00:26:32.829 Atomic Compare & Write Unit: 1
00:26:32.829 Fused Compare & Write: Supported
00:26:32.829 Scatter-Gather List
00:26:32.829 SGL Command Set: Supported
00:26:32.829 SGL Keyed: Supported
00:26:32.829 SGL Bit Bucket Descriptor: Not Supported
00:26:32.829 SGL Metadata Pointer: Not Supported
00:26:32.829 Oversized SGL: Not Supported
00:26:32.829 SGL Metadata Address: Not Supported
00:26:32.829 SGL Offset: Supported
00:26:32.829 Transport SGL Data Block: Not Supported
00:26:32.829 Replay Protected Memory Block: Not Supported
00:26:32.829 
00:26:32.829 Firmware Slot Information
00:26:32.829 =========================
00:26:32.829 Active slot: 1
00:26:32.829 Slot 1 Firmware Revision: 24.09
00:26:32.829 
00:26:32.829 
00:26:32.829 Commands Supported and Effects
00:26:32.829 ==============================
00:26:32.829 Admin Commands
00:26:32.829 --------------
00:26:32.829 Get Log Page (02h): Supported
00:26:32.829 Identify (06h): Supported
00:26:32.829 Abort (08h): Supported
00:26:32.829 Set Features (09h): Supported
00:26:32.829 Get Features (0Ah): Supported
00:26:32.829 Asynchronous Event Request (0Ch): Supported
00:26:32.829 Keep Alive (18h): Supported
00:26:32.829 I/O Commands
00:26:32.829 ------------
00:26:32.829 Flush (00h): Supported LBA-Change
00:26:32.829 Write (01h): Supported LBA-Change
00:26:32.829 Read (02h): Supported
00:26:32.829 Compare (05h): Supported
00:26:32.829 Write Zeroes (08h): Supported LBA-Change
00:26:32.829 Dataset Management (09h): Supported LBA-Change
00:26:32.829 Copy (19h): Supported LBA-Change
00:26:32.829 Unknown (79h): Supported LBA-Change
00:26:32.829 Unknown (7Ah): Supported
00:26:32.829 
00:26:32.829 Error Log
00:26:32.829 =========
00:26:32.829 
00:26:32.829 Arbitration
00:26:32.829 ===========
00:26:32.829 Arbitration Burst: 1
00:26:32.829 
00:26:32.829 Power Management
00:26:32.829 ================
00:26:32.829 Number of Power States: 1
00:26:32.829 Current Power State: Power State #0
00:26:32.829 Power State #0:
00:26:32.829 Max Power: 0.00 W
00:26:32.829 Non-Operational State: Operational
00:26:32.829 Entry Latency: Not Reported
00:26:32.829 Exit Latency: Not Reported
00:26:32.829 Relative Read Throughput: 0
00:26:32.829 Relative Read Latency: 0
00:26:32.829 Relative Write Throughput: 0
00:26:32.829 Relative Write Latency: 0
00:26:32.829 Idle Power: Not Reported
00:26:32.829 Active Power: Not Reported
00:26:32.829 Non-Operational Permissive Mode: Not Supported
00:26:32.829 
00:26:32.829 Health Information
00:26:32.829 ==================
00:26:32.829 Critical Warnings:
00:26:32.829 Available Spare Space: OK
00:26:32.829 Temperature: OK
00:26:32.829 Device Reliability: OK
00:26:32.829 Read Only: No
00:26:32.830 Volatile Memory Backup: OK
00:26:32.830 Current Temperature: 0 Kelvin (-273 Celsius)
00:26:32.830 Temperature Threshold: [2024-06-10 11:35:01.538249] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538257] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.830 [2024-06-10 11:35:01.538271] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.830 [2024-06-10 11:35:01.538276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:32.830 [2024-06-10 11:35:01.538281] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538307] nvme_ctrlr.c:4276:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:26:32.830 [2024-06-10 11:35:01.538315] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 51983 doesn't match qid 00:26:32.830 [2024-06-10 11:35:01.538329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32642 cdw0:5 sqhd:5530 p:0 m:0 dnr:0 00:26:32.830 [2024-06-10 11:35:01.538335] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 51983 doesn't match qid 00:26:32.830 [2024-06-10 11:35:01.538341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32642 cdw0:5 sqhd:5530 p:0 m:0 dnr:0 00:26:32.830 [2024-06-10 11:35:01.538346] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 51983 doesn't match qid 00:26:32.830 [2024-06-10 11:35:01.538352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32642 cdw0:5 sqhd:5530 p:0 m:0 dnr:0 00:26:32.830 [2024-06-10 11:35:01.538357] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 51983 doesn't match qid 00:26:32.830 [2024-06-10 11:35:01.538364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32642 cdw0:5 sqhd:5530 p:0 m:0 dnr:0 00:26:32.830 [2024-06-10 11:35:01.538372] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.830 [2024-06-10 11:35:01.538392] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.830 [2024-06-10 11:35:01.538397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:26:32.830 [2024-06-10 11:35:01.538404] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.830 [2024-06-10 11:35:01.538416] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538432] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.830 [2024-06-10 11:35:01.538436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:32.830 [2024-06-10 
11:35:01.538441] nvme_ctrlr.c:1137:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:26:32.830 [2024-06-10 11:35:01.538446] nvme_ctrlr.c:1140:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:26:32.830 [2024-06-10 11:35:01.538451] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538460] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.830 [2024-06-10 11:35:01.538484] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.830 [2024-06-10 11:35:01.538488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:26:32.830 [2024-06-10 11:35:01.538494] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538502] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.830 [2024-06-10 11:35:01.538526] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.830 [2024-06-10 11:35:01.538530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:26:32.830 [2024-06-10 11:35:01.538536] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538545] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.830 [2024-06-10 11:35:01.538567] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.830 [2024-06-10 11:35:01.538572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:26:32.830 [2024-06-10 11:35:01.538577] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538586] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.830 [2024-06-10 11:35:01.538605] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.830 [2024-06-10 11:35:01.538610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:26:32.830 [2024-06-10 11:35:01.538616] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538624] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: 
*DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.830 [2024-06-10 11:35:01.538648] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.830 [2024-06-10 11:35:01.538653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:26:32.830 [2024-06-10 11:35:01.538658] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538667] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.830 [2024-06-10 11:35:01.538689] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.830 [2024-06-10 11:35:01.538694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:26:32.830 [2024-06-10 11:35:01.538700] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538710] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.830 [2024-06-10 11:35:01.538733] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.830 [2024-06-10 11:35:01.538737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:26:32.830 [2024-06-10 11:35:01.538743] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538751] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.830 [2024-06-10 11:35:01.538774] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.830 [2024-06-10 11:35:01.538779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:26:32.830 [2024-06-10 11:35:01.538785] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538793] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.830 [2024-06-10 11:35:01.538812] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.830 [2024-06-10 11:35:01.538817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 
dnr:0 00:26:32.830 [2024-06-10 11:35:01.538822] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538831] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.830 [2024-06-10 11:35:01.538852] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.830 [2024-06-10 11:35:01.538857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:26:32.830 [2024-06-10 11:35:01.538862] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538871] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.830 [2024-06-10 11:35:01.538892] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.830 [2024-06-10 11:35:01.538897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:26:32.830 [2024-06-10 11:35:01.538902] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538910] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.830 [2024-06-10 11:35:01.538917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.831 [2024-06-10 11:35:01.538930] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.831 [2024-06-10 11:35:01.538934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:26:32.831 [2024-06-10 11:35:01.538941] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.538950] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.538956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.831 [2024-06-10 11:35:01.538969] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.831 [2024-06-10 11:35:01.538974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:26:32.831 [2024-06-10 11:35:01.538979] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.538988] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.538995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.831 [2024-06-10 
11:35:01.539011] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.831 [2024-06-10 11:35:01.539016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:26:32.831 [2024-06-10 11:35:01.539022] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.539030] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.539037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.831 [2024-06-10 11:35:01.539052] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.831 [2024-06-10 11:35:01.539056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:26:32.831 [2024-06-10 11:35:01.539061] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.539070] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.539076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.831 [2024-06-10 11:35:01.539091] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.831 [2024-06-10 11:35:01.539096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:26:32.831 [2024-06-10 11:35:01.539101] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.539109] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.539116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.831 [2024-06-10 11:35:01.539131] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.831 [2024-06-10 11:35:01.539135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:26:32.831 [2024-06-10 11:35:01.539140] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.539149] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.539156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.831 [2024-06-10 11:35:01.539174] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.831 [2024-06-10 11:35:01.539179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:26:32.831 [2024-06-10 11:35:01.539185] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.539194] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.539201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.831 [2024-06-10 11:35:01.539215] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.831 [2024-06-10 11:35:01.539220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:26:32.831 [2024-06-10 11:35:01.539225] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.539233] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.539240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.831 [2024-06-10 11:35:01.539257] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.831 [2024-06-10 11:35:01.539261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:26:32.831 [2024-06-10 11:35:01.539266] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.539275] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.539282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.831 [2024-06-10 11:35:01.539296] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.831 [2024-06-10 11:35:01.539301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:26:32.831 [2024-06-10 11:35:01.539306] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.539314] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.539321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.831 [2024-06-10 11:35:01.539338] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.831 [2024-06-10 11:35:01.539342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:26:32.831 [2024-06-10 11:35:01.539347] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.539356] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.539363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.831 [2024-06-10 11:35:01.539379] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.831 [2024-06-10 11:35:01.539384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:26:32.831 
[2024-06-10 11:35:01.539389] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.539397] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.539404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.831 [2024-06-10 11:35:01.539417] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.831 [2024-06-10 11:35:01.539423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:26:32.831 [2024-06-10 11:35:01.539428] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.539436] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.539443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.831 [2024-06-10 11:35:01.539458] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.831 [2024-06-10 11:35:01.539462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:26:32.831 [2024-06-10 11:35:01.539468] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.539476] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.831 [2024-06-10 11:35:01.539483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.831 [2024-06-10 11:35:01.539495] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.831 [2024-06-10 11:35:01.539500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:26:32.831 [2024-06-10 11:35:01.539505] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.539513] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.539520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.832 [2024-06-10 11:35:01.539539] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.832 [2024-06-10 11:35:01.539543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:26:32.832 [2024-06-10 11:35:01.539548] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.539557] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.539564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.832 [2024-06-10 11:35:01.539578] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.832 [2024-06-10 11:35:01.539583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:26:32.832 [2024-06-10 11:35:01.539588] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.539596] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.539603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.832 [2024-06-10 11:35:01.539618] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.832 [2024-06-10 11:35:01.539622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:26:32.832 [2024-06-10 11:35:01.539627] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.539636] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.539642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.832 [2024-06-10 11:35:01.539657] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.832 [2024-06-10 11:35:01.539663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:26:32.832 [2024-06-10 11:35:01.539668] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.539677] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.539683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.832 [2024-06-10 11:35:01.539696] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.832 [2024-06-10 11:35:01.539701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:26:32.832 [2024-06-10 11:35:01.539706] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.539714] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.539721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.832 [2024-06-10 11:35:01.539739] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.832 [2024-06-10 11:35:01.539744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:26:32.832 [2024-06-10 11:35:01.539749] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.539757] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.539768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.832 [2024-06-10 11:35:01.539783] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.832 [2024-06-10 11:35:01.539788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:26:32.832 [2024-06-10 11:35:01.539793] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.539801] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.539808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.832 [2024-06-10 11:35:01.539821] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.832 [2024-06-10 11:35:01.539825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:26:32.832 [2024-06-10 11:35:01.539831] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.539839] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.539846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.832 [2024-06-10 11:35:01.539860] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.832 [2024-06-10 11:35:01.539865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:26:32.832 [2024-06-10 11:35:01.539870] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.539878] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.539885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.832 [2024-06-10 11:35:01.539901] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.832 [2024-06-10 11:35:01.539906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:26:32.832 [2024-06-10 11:35:01.539911] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.539919] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.539926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.832 [2024-06-10 11:35:01.539943] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.832 [2024-06-10 11:35:01.539947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:26:32.832 [2024-06-10 
11:35:01.539953] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.539961] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.539968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.832 [2024-06-10 11:35:01.539986] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.832 [2024-06-10 11:35:01.539991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:26:32.832 [2024-06-10 11:35:01.539996] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.540005] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.540011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.832 [2024-06-10 11:35:01.540024] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.832 [2024-06-10 11:35:01.540029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:26:32.832 [2024-06-10 11:35:01.540034] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.540042] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.540049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.832 [2024-06-10 11:35:01.540064] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.832 [2024-06-10 11:35:01.540068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:26:32.832 [2024-06-10 11:35:01.540074] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.540082] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.540089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.832 [2024-06-10 11:35:01.540103] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.832 [2024-06-10 11:35:01.540108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:26:32.832 [2024-06-10 11:35:01.540113] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.540121] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.540128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.832 [2024-06-10 11:35:01.540144] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.832 [2024-06-10 11:35:01.540148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:26:32.832 [2024-06-10 11:35:01.540154] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.540162] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.832 [2024-06-10 11:35:01.540169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.832 [2024-06-10 11:35:01.540185] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.832 [2024-06-10 11:35:01.540190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:26:32.832 [2024-06-10 11:35:01.540195] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540203] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.833 [2024-06-10 11:35:01.540227] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.833 [2024-06-10 11:35:01.540231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:26:32.833 [2024-06-10 11:35:01.540236] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540245] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.833 [2024-06-10 11:35:01.540265] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.833 [2024-06-10 11:35:01.540269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:26:32.833 [2024-06-10 11:35:01.540274] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540283] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.833 [2024-06-10 11:35:01.540306] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.833 [2024-06-10 11:35:01.540310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:26:32.833 [2024-06-10 11:35:01.540315] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540324] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.833 [2024-06-10 11:35:01.540349] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.833 [2024-06-10 11:35:01.540354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:26:32.833 [2024-06-10 11:35:01.540359] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540367] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.833 [2024-06-10 11:35:01.540388] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.833 [2024-06-10 11:35:01.540392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:26:32.833 [2024-06-10 11:35:01.540398] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540406] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.833 [2024-06-10 11:35:01.540428] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.833 [2024-06-10 11:35:01.540432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:26:32.833 [2024-06-10 11:35:01.540438] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540446] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.833 [2024-06-10 11:35:01.540467] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.833 [2024-06-10 11:35:01.540472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:26:32.833 [2024-06-10 11:35:01.540477] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540485] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.833 [2024-06-10 11:35:01.540505] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.833 [2024-06-10 11:35:01.540509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:26:32.833 [2024-06-10 
11:35:01.540515] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540523] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.833 [2024-06-10 11:35:01.540546] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.833 [2024-06-10 11:35:01.540551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:26:32.833 [2024-06-10 11:35:01.540556] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540564] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.833 [2024-06-10 11:35:01.540586] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.833 [2024-06-10 11:35:01.540590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:26:32.833 [2024-06-10 11:35:01.540595] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540604] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.833 [2024-06-10 11:35:01.540626] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.833 [2024-06-10 11:35:01.540631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:26:32.833 [2024-06-10 11:35:01.540636] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540644] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.833 [2024-06-10 11:35:01.540666] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.833 [2024-06-10 11:35:01.540670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:26:32.833 [2024-06-10 11:35:01.540676] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540684] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.833 [2024-06-10 11:35:01.540707] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.833 [2024-06-10 11:35:01.540712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:26:32.833 [2024-06-10 11:35:01.540717] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540725] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.540732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.833 [2024-06-10 11:35:01.540749] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.833 [2024-06-10 11:35:01.540753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:26:32.833 [2024-06-10 11:35:01.540758] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.544774] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.544782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:32.833 [2024-06-10 11:35:01.544795] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:32.833 [2024-06-10 11:35:01.544800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0014 p:0 m:0 dnr:0 00:26:32.833 [2024-06-10 11:35:01.544805] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183800 00:26:32.833 [2024-06-10 11:35:01.544811] nvme_ctrlr.c:1259:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:26:32.833 0 Kelvin (-273 Celsius) 00:26:32.833 Available Spare: 0% 00:26:32.833 Available Spare Threshold: 0% 00:26:32.833 Life Percentage Used: 0% 00:26:32.833 Data Units Read: 0 00:26:32.833 Data Units Written: 0 00:26:32.833 Host Read Commands: 0 00:26:32.833 Host Write Commands: 0 00:26:32.833 Controller Busy Time: 0 minutes 00:26:32.833 Power Cycles: 0 00:26:32.833 Power On Hours: 0 hours 00:26:32.833 Unsafe Shutdowns: 0 00:26:32.833 Unrecoverable Media Errors: 0 00:26:32.833 Lifetime Error Log Entries: 0 00:26:32.833 Warning Temperature Time: 0 minutes 00:26:32.833 Critical Temperature Time: 0 minutes 00:26:32.833 00:26:32.833 Number of Queues 00:26:32.833 ================ 00:26:32.833 Number of I/O Submission Queues: 127 00:26:32.833 Number of I/O Completion Queues: 127 00:26:32.833 00:26:32.833 Active Namespaces 00:26:32.833 ================= 00:26:32.833 Namespace ID:1 00:26:32.833 Error Recovery Timeout: Unlimited 00:26:32.833 Command Set Identifier: NVM (00h) 00:26:32.833 Deallocate: Supported 00:26:32.833 Deallocated/Unwritten Error: Not Supported 00:26:32.833 Deallocated Read Value: Unknown 00:26:32.833 Deallocate in Write Zeroes: Not Supported 00:26:32.833 Deallocated Guard Field: 0xFFFF 00:26:32.833 Flush: Supported 00:26:32.834 Reservation: Supported 00:26:32.834 Namespace Sharing Capabilities: Multiple Controllers 00:26:32.834 Size (in LBAs): 131072 (0GiB) 00:26:32.834 Capacity (in LBAs): 131072 (0GiB) 00:26:32.834 Utilization (in LBAs): 131072 
(0GiB) 00:26:32.834 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:32.834 EUI64: ABCDEF0123456789 00:26:32.834 UUID: b5dfe723-7b57-4ebb-8bca-4adcd6aac3cb 00:26:32.834 Thin Provisioning: Not Supported 00:26:32.834 Per-NS Atomic Units: Yes 00:26:32.834 Atomic Boundary Size (Normal): 0 00:26:32.834 Atomic Boundary Size (PFail): 0 00:26:32.834 Atomic Boundary Offset: 0 00:26:32.834 Maximum Single Source Range Length: 65535 00:26:32.834 Maximum Copy Length: 65535 00:26:32.834 Maximum Source Range Count: 1 00:26:32.834 NGUID/EUI64 Never Reused: No 00:26:32.834 Namespace Write Protected: No 00:26:32.834 Number of LBA Formats: 1 00:26:32.834 Current LBA Format: LBA Format #00 00:26:32.834 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:32.834 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- host/identify.sh@51 -- # sync 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:32.834 rmmod nvme_rdma 00:26:32.834 rmmod nvme_fabrics 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3724849 ']' 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3724849 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@949 -- # '[' -z 3724849 ']' 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@953 -- # kill -0 3724849 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@954 -- # uname 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3724849 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3724849' 00:26:32.834 killing process with pid 3724849 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@968 -- # kill 
3724849 00:26:32.834 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@973 -- # wait 3724849 00:26:33.095 11:35:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:33.095 11:35:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:26:33.095 00:26:33.095 real 0m8.902s 00:26:33.095 user 0m8.681s 00:26:33.095 sys 0m5.505s 00:26:33.095 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:33.095 11:35:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:33.095 ************************************ 00:26:33.095 END TEST nvmf_identify 00:26:33.095 ************************************ 00:26:33.095 11:35:01 nvmf_rdma -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:26:33.095 11:35:01 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:33.095 11:35:01 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:33.095 11:35:01 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:33.095 ************************************ 00:26:33.095 START TEST nvmf_perf 00:26:33.095 ************************************ 00:26:33.095 11:35:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:26:33.357 * Looking for test storage... 00:26:33.357 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:33.357 
11:35:02 nvmf_rdma.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:33.357 11:35:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:41.503 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:41.503 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:41.503 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:41.503 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:41.503 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:41.503 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:41.503 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:26:41.504 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:26:41.504 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net 
devices under 0000:98:00.0: mlx_0_0' 00:26:41.504 Found net devices under 0000:98:00.0: mlx_0_0 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:26:41.504 Found net devices under 0000:98:00.1: mlx_0_1 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@420 -- # rdma_device_init 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # uname 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:41.504 11:35:08 nvmf_rdma.nvmf_perf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- 
nvmf/common.sh@105 -- # continue 2 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:41.504 26: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:41.504 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:26:41.504 altname enp152s0f0np0 00:26:41.504 altname ens817f0np0 00:26:41.504 inet 192.168.100.8/24 scope global mlx_0_0 00:26:41.504 valid_lft forever preferred_lft forever 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:26:41.504 27: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:41.504 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:26:41.504 altname enp152s0f1np1 00:26:41.504 altname ens817f1np1 00:26:41.504 inet 192.168.100.9/24 scope global mlx_0_1 00:26:41.504 valid_lft forever preferred_lft forever 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:41.504 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf 
-- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:41.505 192.168.100.9' 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:41.505 192.168.100.9' 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # head -n 1 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:41.505 192.168.100.9' 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # tail -n +2 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # head -n 1 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:41.505 11:35:09 
nvmf_rdma.nvmf_perf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3728889 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3728889 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@830 -- # '[' -z 3728889 ']' 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:41.505 11:35:09 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:41.505 [2024-06-10 11:35:09.252216] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:26:41.505 [2024-06-10 11:35:09.252292] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.505 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.505 [2024-06-10 11:35:09.318767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:41.505 [2024-06-10 11:35:09.393294] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:41.505 [2024-06-10 11:35:09.393331] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:41.505 [2024-06-10 11:35:09.393339] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:41.505 [2024-06-10 11:35:09.393346] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:41.505 [2024-06-10 11:35:09.393355] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:41.505 [2024-06-10 11:35:09.393493] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.505 [2024-06-10 11:35:09.393616] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:26:41.505 [2024-06-10 11:35:09.393794] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:26:41.505 [2024-06-10 11:35:09.393812] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.505 11:35:10 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:41.505 11:35:10 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@863 -- # return 0 00:26:41.505 11:35:10 nvmf_rdma.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:41.505 11:35:10 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:41.505 11:35:10 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:41.505 11:35:10 nvmf_rdma.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.505 11:35:10 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:41.505 11:35:10 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:41.767 11:35:10 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:41.767 11:35:10 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:41.767 11:35:10 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:26:41.767 11:35:10 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:42.027 11:35:10 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:42.027 11:35:10 nvmf_rdma.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:26:42.027 11:35:10 nvmf_rdma.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:42.027 11:35:10 nvmf_rdma.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:26:42.027 11:35:10 nvmf_rdma.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:26:42.287 [2024-06-10 11:35:11.046874] rdma.c:2724:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:26:42.287 [2024-06-10 11:35:11.076583] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x124b1b0/0x13791c0) succeed. 00:26:42.287 [2024-06-10 11:35:11.091222] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x124c7f0/0x1259040) succeed. 
00:26:42.288 11:35:11 nvmf_rdma.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:42.548 11:35:11 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:42.548 11:35:11 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:42.809 11:35:11 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:42.809 11:35:11 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:42.809 11:35:11 nvmf_rdma.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:43.070 [2024-06-10 11:35:11.868716] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:43.070 11:35:11 nvmf_rdma.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:26:43.337 11:35:12 nvmf_rdma.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:26:43.337 11:35:12 nvmf_rdma.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:43.337 11:35:12 nvmf_rdma.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:43.337 11:35:12 nvmf_rdma.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:44.723 Initializing NVMe Controllers 00:26:44.723 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:26:44.723 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:26:44.723 Initialization complete. Launching workers. 00:26:44.723 ======================================================== 00:26:44.723 Latency(us) 00:26:44.723 Device Information : IOPS MiB/s Average min max 00:26:44.723 PCIE (0000:65:00.0) NSID 1 from core 0: 79602.00 310.95 401.44 13.30 5398.55 00:26:44.723 ======================================================== 00:26:44.723 Total : 79602.00 310.95 401.44 13.30 5398.55 00:26:44.723 00:26:44.723 11:35:13 nvmf_rdma.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:44.723 EAL: No free 2048 kB hugepages reported on node 1 00:26:48.022 Initializing NVMe Controllers 00:26:48.022 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:48.022 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:48.022 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:48.022 Initialization complete. Launching workers. 
00:26:48.022 ======================================================== 00:26:48.022 Latency(us) 00:26:48.022 Device Information : IOPS MiB/s Average min max 00:26:48.022 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9715.98 37.95 102.23 37.31 4075.53 00:26:48.022 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7170.99 28.01 139.17 52.63 4087.73 00:26:48.022 ======================================================== 00:26:48.022 Total : 16886.97 65.96 117.91 37.31 4087.73 00:26:48.022 00:26:48.022 11:35:16 nvmf_rdma.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:48.022 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.387 Initializing NVMe Controllers 00:26:51.387 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:51.387 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:51.387 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:51.387 Initialization complete. Launching workers. 00:26:51.387 ======================================================== 00:26:51.387 Latency(us) 00:26:51.387 Device Information : IOPS MiB/s Average min max 00:26:51.387 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 20341.53 79.46 1573.37 401.78 5929.64 00:26:51.387 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4031.91 15.75 7979.34 7038.23 8725.58 00:26:51.387 ======================================================== 00:26:51.387 Total : 24373.44 95.21 2633.06 401.78 8725.58 00:26:51.387 00:26:51.387 11:35:20 nvmf_rdma.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:26:51.387 11:35:20 nvmf_rdma.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:51.387 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.590 Initializing NVMe Controllers 00:26:55.590 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:55.590 Controller IO queue size 128, less than required. 00:26:55.590 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:55.590 Controller IO queue size 128, less than required. 00:26:55.590 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:55.590 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:55.590 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:55.590 Initialization complete. Launching workers. 
00:26:55.590 ======================================================== 00:26:55.590 Latency(us) 00:26:55.590 Device Information : IOPS MiB/s Average min max 00:26:55.590 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4311.36 1077.84 29695.12 10551.04 80403.08 00:26:55.590 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4396.84 1099.21 28786.63 14656.86 52138.71 00:26:55.590 ======================================================== 00:26:55.590 Total : 8708.20 2177.05 29236.41 10551.04 80403.08 00:26:55.590 00:26:55.590 11:35:24 nvmf_rdma.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:26:55.850 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.112 No valid NVMe controllers or AIO or URING devices found 00:26:56.112 Initializing NVMe Controllers 00:26:56.112 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:56.112 Controller IO queue size 128, less than required. 00:26:56.112 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:56.112 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:56.112 Controller IO queue size 128, less than required. 00:26:56.112 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:56.112 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:26:56.112 WARNING: Some requested NVMe devices were skipped 00:26:56.112 11:35:24 nvmf_rdma.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:26:56.112 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.396 Initializing NVMe Controllers 00:27:01.396 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:01.396 Controller IO queue size 128, less than required. 00:27:01.396 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:01.396 Controller IO queue size 128, less than required. 00:27:01.396 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:01.396 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:01.396 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:01.396 Initialization complete. Launching workers. 
00:27:01.396 00:27:01.396 ==================== 00:27:01.396 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:01.396 RDMA transport: 00:27:01.396 dev name: mlx5_0 00:27:01.396 polls: 264082 00:27:01.396 idle_polls: 260123 00:27:01.396 completions: 54066 00:27:01.396 queued_requests: 1 00:27:01.396 total_send_wrs: 27033 00:27:01.396 send_doorbell_updates: 3526 00:27:01.397 total_recv_wrs: 27160 00:27:01.397 recv_doorbell_updates: 3531 00:27:01.397 --------------------------------- 00:27:01.397 00:27:01.397 ==================== 00:27:01.397 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:01.397 RDMA transport: 00:27:01.397 dev name: mlx5_0 00:27:01.397 polls: 267395 00:27:01.397 idle_polls: 267127 00:27:01.397 completions: 17778 00:27:01.397 queued_requests: 1 00:27:01.397 total_send_wrs: 8889 00:27:01.397 send_doorbell_updates: 254 00:27:01.397 total_recv_wrs: 9016 00:27:01.397 recv_doorbell_updates: 255 00:27:01.397 --------------------------------- 00:27:01.397 ======================================================== 00:27:01.397 Latency(us) 00:27:01.397 Device Information : IOPS MiB/s Average min max 00:27:01.397 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6757.00 1689.25 18896.00 8294.43 61090.79 00:27:01.397 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2221.67 555.42 57175.49 27790.50 81307.01 00:27:01.397 ======================================================== 00:27:01.397 Total : 8978.68 2244.67 28367.83 8294.43 81307.01 00:27:01.397 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- host/perf.sh@66 -- # sync 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:01.397 rmmod nvme_rdma 00:27:01.397 rmmod nvme_fabrics 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3728889 ']' 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3728889 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@949 -- # '[' -z 3728889 ']' 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@953 -- # kill -0 3728889 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@954 -- # uname 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- 
common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3728889 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3728889' 00:27:01.397 killing process with pid 3728889 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@968 -- # kill 3728889 00:27:01.397 11:35:29 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@973 -- # wait 3728889 00:27:02.779 11:35:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:02.779 11:35:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:02.779 00:27:02.779 real 0m29.634s 00:27:02.779 user 1m32.061s 00:27:02.779 sys 0m6.147s 00:27:02.779 11:35:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:02.779 11:35:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:02.779 ************************************ 00:27:02.779 END TEST nvmf_perf 00:27:02.779 ************************************ 00:27:02.779 11:35:31 nvmf_rdma -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:27:02.779 11:35:31 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:02.779 11:35:31 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:02.780 11:35:31 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:02.780 ************************************ 00:27:02.780 START TEST nvmf_fio_host 00:27:02.780 ************************************ 00:27:02.780 11:35:31 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:27:03.040 * Looking for test storage... 
00:27:03.040 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:03.040 11:35:31 nvmf_rdma.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:03.040 11:35:31 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.040 11:35:31 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.040 11:35:31 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.040 11:35:31 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:03.041 11:35:31 nvmf_rdma.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 
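The device-discovery phase the trace is walking through here reduces to a short shell pattern: enumerate the Mellanox PCI functions, find the netdev each one exposes in sysfs, and read its IPv4 address. The sketch below is illustrative only (the loop structure and the echoed wording are not part of the test); the sysfs path, the 0000:98:00.0/0000:98:00.1 functions, the mlx_0_0/mlx_0_1 names and the ip/awk/cut pipeline are all taken from this log.

for pci in 0000:98:00.0 0000:98:00.1; do
    # each RDMA-capable port publishes its netdev under /sys/bus/pci/devices/$pci/net/
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdev" ] || continue                      # skip if the glob did not match
        ifname=$(basename "$netdev")                      # e.g. mlx_0_0, mlx_0_1
        # same pipeline nvmf/common.sh uses to pull the IPv4 address off the interface
        addr=$(ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1)
        echo "Found net device $ifname under $pci with IP ${addr:-unassigned}"
    done
done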
00:27:09.627 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:27:09.627 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:27:09.627 Found net devices under 0000:98:00.0: mlx_0_0 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:27:09.627 Found net devices under 0000:98:00.1: mlx_0_1 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ 
yes == yes ]] 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@420 -- # rdma_device_init 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # uname 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:27:09.627 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:27:09.889 26: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:09.889 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:27:09.889 altname enp152s0f0np0 00:27:09.889 altname ens817f0np0 00:27:09.889 inet 192.168.100.8/24 scope global mlx_0_0 00:27:09.889 valid_lft forever preferred_lft forever 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:27:09.889 27: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:09.889 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:27:09.889 altname enp152s0f1np1 00:27:09.889 altname ens817f1np1 00:27:09.889 inet 192.168.100.9/24 scope global mlx_0_1 00:27:09.889 valid_lft forever preferred_lft forever 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 
-- # continue 2 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:27:09.889 192.168.100.9' 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:27:09.889 192.168.100.9' 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # head -n 1 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # head -n 1 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:27:09.889 192.168.100.9' 00:27:09.889 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # tail -n +2 00:27:09.890 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:09.890 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:27:09.890 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:09.890 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:27:09.890 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:27:09.890 11:35:38 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:27:09.890 11:35:38 nvmf_rdma.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:27:09.890 11:35:38 nvmf_rdma.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:09.890 11:35:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:09.890 11:35:38 nvmf_rdma.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:09.890 11:35:38 nvmf_rdma.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3736965 00:27:09.890 11:35:38 nvmf_rdma.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:09.890 11:35:38 nvmf_rdma.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:09.890 11:35:38 nvmf_rdma.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3736965 00:27:09.890 11:35:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@830 -- # '[' -z 3736965 ']' 00:27:09.890 11:35:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.890 11:35:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:09.890 11:35:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.890 11:35:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:09.890 11:35:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.151 [2024-06-10 11:35:38.870555] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:27:10.151 [2024-06-10 11:35:38.870608] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:10.151 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.151 [2024-06-10 11:35:38.930473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:10.151 [2024-06-10 11:35:38.995900] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:10.151 [2024-06-10 11:35:38.995938] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:10.151 [2024-06-10 11:35:38.995946] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:10.151 [2024-06-10 11:35:38.995953] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:10.151 [2024-06-10 11:35:38.995958] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:10.151 [2024-06-10 11:35:38.996120] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.151 [2024-06-10 11:35:38.996241] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:27:10.151 [2024-06-10 11:35:38.996394] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.152 [2024-06-10 11:35:38.996395] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:27:10.724 11:35:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:10.724 11:35:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@863 -- # return 0 00:27:10.724 11:35:39 nvmf_rdma.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:10.985 [2024-06-10 11:35:39.809953] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12a00b0/0x12a45a0) succeed. 
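With the RDMA transport created (the two "Create IB device ... succeed." notices), host/fio.sh assembles the target it is about to exercise. Condensed as a sketch, the RPC sequence recorded verbatim in the trace that follows is (rpc.py is shorthand for /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py; every size, NQN, address and port below is the value the test actually uses):

rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192       # issued just above
rpc.py bdev_malloc_create 64 512 -b Malloc1                                  # 64 MB malloc bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1              # expose the bdev as namespace 1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

fio is then pointed at that listener through the SPDK NVMe fio plugin, again exactly as traced below:

LD_PRELOAD=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096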
00:27:10.985 [2024-06-10 11:35:39.823162] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12a16f0/0x12e5c30) succeed. 00:27:11.246 11:35:39 nvmf_rdma.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:11.246 11:35:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:11.246 11:35:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.246 11:35:40 nvmf_rdma.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:11.246 Malloc1 00:27:11.246 11:35:40 nvmf_rdma.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:11.508 11:35:40 nvmf_rdma.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:11.769 11:35:40 nvmf_rdma.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:11.769 [2024-06-10 11:35:40.675804] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:11.769 11:35:40 nvmf_rdma.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:12.030 11:35:40 nvmf_rdma.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:27:12.030 11:35:40 nvmf_rdma.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:12.030 11:35:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:12.030 11:35:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:27:12.030 11:35:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:12.030 11:35:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:27:12.030 11:35:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:12.030 11:35:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:27:12.030 11:35:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:27:12.030 11:35:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:27:12.030 11:35:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:12.030 11:35:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:27:12.030 11:35:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:27:12.030 11:35:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:27:12.030 
11:35:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:27:12.030 11:35:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:27:12.030 11:35:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:12.030 11:35:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:27:12.030 11:35:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:27:12.030 11:35:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:27:12.030 11:35:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:27:12.030 11:35:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:12.030 11:35:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:12.291 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:12.291 fio-3.35 00:27:12.291 Starting 1 thread 00:27:12.552 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.096 00:27:15.096 test: (groupid=0, jobs=1): err= 0: pid=3737501: Mon Jun 10 11:35:43 2024 00:27:15.096 read: IOPS=18.6k, BW=72.7MiB/s (76.2MB/s)(146MiB/2003msec) 00:27:15.096 slat (nsec): min=2054, max=33218, avg=2145.90, stdev=470.71 00:27:15.096 clat (usec): min=2737, max=5634, avg=3419.32, stdev=611.81 00:27:15.096 lat (usec): min=2762, max=5636, avg=3421.46, stdev=611.85 00:27:15.096 clat percentiles (usec): 00:27:15.096 | 1.00th=[ 2802], 5.00th=[ 3032], 10.00th=[ 3032], 20.00th=[ 3032], 00:27:15.096 | 30.00th=[ 3032], 40.00th=[ 3064], 50.00th=[ 3064], 60.00th=[ 3064], 00:27:15.096 | 70.00th=[ 3097], 80.00th=[ 4424], 90.00th=[ 4424], 95.00th=[ 4424], 00:27:15.096 | 99.00th=[ 4490], 99.50th=[ 4817], 99.90th=[ 4883], 99.95th=[ 5014], 00:27:15.096 | 99.99th=[ 5604] 00:27:15.096 bw ( KiB/s): min=56632, max=84056, per=99.96%, avg=74436.00, stdev=12823.63, samples=4 00:27:15.096 iops : min=14158, max=21014, avg=18609.00, stdev=3205.91, samples=4 00:27:15.096 write: IOPS=18.6k, BW=72.8MiB/s (76.3MB/s)(146MiB/2003msec); 0 zone resets 00:27:15.096 slat (nsec): min=2127, max=30209, avg=2254.70, stdev=503.63 00:27:15.096 clat (usec): min=2755, max=5643, avg=3416.82, stdev=611.48 00:27:15.096 lat (usec): min=2757, max=5645, avg=3419.07, stdev=611.53 00:27:15.096 clat percentiles (usec): 00:27:15.096 | 1.00th=[ 2802], 5.00th=[ 3032], 10.00th=[ 3032], 20.00th=[ 3032], 00:27:15.096 | 30.00th=[ 3032], 40.00th=[ 3064], 50.00th=[ 3064], 60.00th=[ 3064], 00:27:15.096 | 70.00th=[ 3097], 80.00th=[ 4424], 90.00th=[ 4424], 95.00th=[ 4424], 00:27:15.096 | 99.00th=[ 4490], 99.50th=[ 4817], 99.90th=[ 4883], 99.95th=[ 5145], 00:27:15.096 | 99.99th=[ 5604] 00:27:15.096 bw ( KiB/s): min=56904, max=84032, per=99.99%, avg=74496.00, stdev=12767.71, samples=4 00:27:15.096 iops : min=14226, max=21008, avg=18624.00, stdev=3191.93, samples=4 00:27:15.096 lat (msec) : 4=73.32%, 10=26.68% 00:27:15.096 cpu : usr=99.65%, sys=0.00%, ctx=15, majf=0, minf=3 00:27:15.096 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:15.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:27:15.096 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:15.096 issued rwts: total=37287,37306,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:15.096 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:15.096 00:27:15.096 Run status group 0 (all jobs): 00:27:15.096 READ: bw=72.7MiB/s (76.2MB/s), 72.7MiB/s-72.7MiB/s (76.2MB/s-76.2MB/s), io=146MiB (153MB), run=2003-2003msec 00:27:15.096 WRITE: bw=72.8MiB/s (76.3MB/s), 72.8MiB/s-72.8MiB/s (76.3MB/s-76.3MB/s), io=146MiB (153MB), run=2003-2003msec 00:27:15.097 11:35:43 nvmf_rdma.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:27:15.097 11:35:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:27:15.097 11:35:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:27:15.097 11:35:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:15.097 11:35:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:27:15.097 11:35:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:15.097 11:35:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:27:15.097 11:35:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:27:15.097 11:35:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:27:15.097 11:35:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:15.097 11:35:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:27:15.097 11:35:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:27:15.097 11:35:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:27:15.097 11:35:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:27:15.097 11:35:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:27:15.097 11:35:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:15.097 11:35:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:27:15.097 11:35:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:27:15.097 11:35:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:27:15.097 11:35:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:27:15.097 11:35:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:15.097 11:35:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 
trsvcid=4420 ns=1' 00:27:15.097 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:15.097 fio-3.35 00:27:15.097 Starting 1 thread 00:27:15.097 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.682 00:27:17.682 test: (groupid=0, jobs=1): err= 0: pid=3738326: Mon Jun 10 11:35:46 2024 00:27:17.682 read: IOPS=12.9k, BW=202MiB/s (211MB/s)(399MiB/1979msec) 00:27:17.682 slat (nsec): min=3421, max=56822, avg=3671.72, stdev=1198.24 00:27:17.682 clat (usec): min=245, max=10551, avg=2853.86, stdev=1595.96 00:27:17.682 lat (usec): min=249, max=10574, avg=2857.54, stdev=1596.25 00:27:17.682 clat percentiles (usec): 00:27:17.682 | 1.00th=[ 799], 5.00th=[ 1156], 10.00th=[ 1336], 20.00th=[ 1549], 00:27:17.682 | 30.00th=[ 1762], 40.00th=[ 2008], 50.00th=[ 2311], 60.00th=[ 2769], 00:27:17.682 | 70.00th=[ 3359], 80.00th=[ 4080], 90.00th=[ 5538], 95.00th=[ 6259], 00:27:17.682 | 99.00th=[ 7111], 99.50th=[ 7635], 99.90th=[ 8848], 99.95th=[ 9372], 00:27:17.682 | 99.99th=[10552] 00:27:17.682 bw ( KiB/s): min=86240, max=111776, per=49.25%, avg=101720.00, stdev=12172.30, samples=4 00:27:17.682 iops : min= 5390, max= 6986, avg=6357.50, stdev=760.77, samples=4 00:27:17.682 write: IOPS=7270, BW=114MiB/s (119MB/s)(206MiB/1816msec); 0 zone resets 00:27:17.682 slat (usec): min=39, max=149, avg=41.00, stdev= 6.76 00:27:17.682 clat (usec): min=1826, max=23657, avg=12120.82, stdev=4690.47 00:27:17.682 lat (usec): min=1866, max=23697, avg=12161.83, stdev=4690.45 00:27:17.682 clat percentiles (usec): 00:27:17.682 | 1.00th=[ 3163], 5.00th=[ 4752], 10.00th=[ 5866], 20.00th=[ 7570], 00:27:17.682 | 30.00th=[ 8848], 40.00th=[10159], 50.00th=[11469], 60.00th=[14615], 00:27:17.682 | 70.00th=[15926], 80.00th=[16909], 90.00th=[17957], 95.00th=[18744], 00:27:17.682 | 99.00th=[20841], 99.50th=[21627], 99.90th=[22414], 99.95th=[22676], 00:27:17.682 | 99.99th=[23462] 00:27:17.682 bw ( KiB/s): min=92096, max=112928, per=90.02%, avg=104720.00, stdev=9887.07, samples=4 00:27:17.682 iops : min= 5756, max= 7058, avg=6545.00, stdev=617.94, samples=4 00:27:17.682 lat (usec) : 250=0.01%, 500=0.12%, 750=0.42%, 1000=1.23% 00:27:17.682 lat (msec) : 2=24.63%, 4=26.72%, 10=26.04%, 20=20.25%, 50=0.59% 00:27:17.682 cpu : usr=97.05%, sys=0.65%, ctx=183, majf=0, minf=10 00:27:17.682 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:27:17.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:17.682 issued rwts: total=25546,13204,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:17.682 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:17.682 00:27:17.682 Run status group 0 (all jobs): 00:27:17.682 READ: bw=202MiB/s (211MB/s), 202MiB/s-202MiB/s (211MB/s-211MB/s), io=399MiB (419MB), run=1979-1979msec 00:27:17.682 WRITE: bw=114MiB/s (119MB/s), 114MiB/s-114MiB/s (119MB/s-119MB/s), io=206MiB (216MB), run=1816-1816msec 00:27:17.682 11:35:46 nvmf_rdma.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:17.682 11:35:46 nvmf_rdma.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:27:17.682 11:35:46 nvmf_rdma.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:17.682 11:35:46 nvmf_rdma.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:17.682 11:35:46 nvmf_rdma.nvmf_fio_host -- host/fio.sh@86 -- 
# nvmftestfini 00:27:17.682 11:35:46 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:17.682 11:35:46 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:27:17.683 11:35:46 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:17.683 11:35:46 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:17.683 11:35:46 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:27:17.683 11:35:46 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:17.683 11:35:46 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:17.683 rmmod nvme_rdma 00:27:17.683 rmmod nvme_fabrics 00:27:17.683 11:35:46 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:17.683 11:35:46 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:27:17.683 11:35:46 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:27:17.683 11:35:46 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3736965 ']' 00:27:17.683 11:35:46 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3736965 00:27:17.683 11:35:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@949 -- # '[' -z 3736965 ']' 00:27:17.683 11:35:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@953 -- # kill -0 3736965 00:27:17.683 11:35:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@954 -- # uname 00:27:17.683 11:35:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:17.683 11:35:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3736965 00:27:17.943 11:35:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:17.943 11:35:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:17.943 11:35:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3736965' 00:27:17.943 killing process with pid 3736965 00:27:17.943 11:35:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@968 -- # kill 3736965 00:27:17.943 11:35:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@973 -- # wait 3736965 00:27:17.943 11:35:46 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:17.943 11:35:46 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:17.943 00:27:17.943 real 0m15.181s 00:27:17.943 user 1m9.319s 00:27:17.943 sys 0m6.008s 00:27:17.943 11:35:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:17.943 11:35:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.943 ************************************ 00:27:17.943 END TEST nvmf_fio_host 00:27:17.943 ************************************ 00:27:18.204 11:35:46 nvmf_rdma -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:27:18.204 11:35:46 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:18.204 11:35:46 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:18.204 11:35:46 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:18.204 ************************************ 00:27:18.204 START TEST nvmf_failover 00:27:18.204 ************************************ 00:27:18.204 11:35:46 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:27:18.204 * Looking for test storage... 00:27:18.204 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.204 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@410 
-- # local -g is_hw=no 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:27:18.205 11:35:47 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:26.350 
11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:27:26.350 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:27:26.350 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:27:26.350 Found net devices under 0000:98:00.0: mlx_0_0 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:26.350 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:27:26.351 Found net devices under 0000:98:00.1: mlx_0_1 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@420 -- # rdma_device_init 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # uname 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@62 -- # modprobe ib_cm 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@63 -- # modprobe ib_core 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@64 -- # modprobe ib_umad 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@66 -- # modprobe iw_cm 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@502 -- # allocate_nic_ips 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # get_rdma_if_list 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # 
[[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:27:26.351 26: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:26.351 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:27:26.351 altname enp152s0f0np0 00:27:26.351 altname ens817f0np0 00:27:26.351 inet 192.168.100.8/24 scope global mlx_0_0 00:27:26.351 valid_lft forever preferred_lft forever 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:27:26.351 27: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:26.351 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:27:26.351 altname enp152s0f1np1 00:27:26.351 altname ens817f1np1 00:27:26.351 inet 192.168.100.9/24 scope global mlx_0_1 00:27:26.351 valid_lft forever preferred_lft forever 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # get_rdma_if_list 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:26.351 11:35:53 
nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:27:26.351 192.168.100.9' 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:27:26.351 192.168.100.9' 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # head -n 1 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:27:26.351 192.168.100.9' 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # tail -n +2 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # head -n 1 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@459 -- # 
'[' -z 192.168.100.8 ']' 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:27:26.351 11:35:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:27:26.351 11:35:54 nvmf_rdma.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:26.351 11:35:54 nvmf_rdma.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:26.351 11:35:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:26.351 11:35:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:26.351 11:35:54 nvmf_rdma.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3742368 00:27:26.351 11:35:54 nvmf_rdma.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3742368 00:27:26.351 11:35:54 nvmf_rdma.nvmf_failover -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:26.351 11:35:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 3742368 ']' 00:27:26.351 11:35:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.351 11:35:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:26.352 11:35:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.352 11:35:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:26.352 11:35:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:26.352 [2024-06-10 11:35:54.098506] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:27:26.352 [2024-06-10 11:35:54.098577] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.352 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.352 [2024-06-10 11:35:54.178893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:26.352 [2024-06-10 11:35:54.243464] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:26.352 [2024-06-10 11:35:54.243498] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:26.352 [2024-06-10 11:35:54.243506] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:26.352 [2024-06-10 11:35:54.243513] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:26.352 [2024-06-10 11:35:54.243518] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
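The xtrace output above is dense, but what nvmf/common.sh is doing in this stretch is simple: it walks the RDMA-capable netdevs (mlx_0_0 and mlx_0_1 in this run), pulls the IPv4 address off each with the ip/awk/cut pipeline shown at common.sh@113, records them as NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP, sets the RDMA transport options, and then starts nvmf_tgt. A minimal sketch of that address-discovery pattern, kept separate from the harness, is below; the helper name get_ip_address, the interface names, and the transport options are taken from the trace, while the standalone wrapper itself is only illustrative.

  #!/usr/bin/env bash
  # Sketch of the address-discovery step traced above (nvmf/common.sh@112-113).
  # Interface names and the resulting addresses are from this run; on another
  # host the RDMA netdevs and subnets will differ.
  get_ip_address() {
      local interface=$1
      # "ip -o -4" prints one line per address; field 4 is addr/prefix, so cut
      # strips the prefix length and leaves the bare IPv4 address.
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this log
  NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this log
  NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'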
00:27:26.352 [2024-06-10 11:35:54.243627] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:27:26.352 [2024-06-10 11:35:54.243804] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:27:26.352 [2024-06-10 11:35:54.244029] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.352 11:35:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:26.352 11:35:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:27:26.352 11:35:54 nvmf_rdma.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:26.352 11:35:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:26.352 11:35:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:26.352 11:35:54 nvmf_rdma.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:26.352 11:35:54 nvmf_rdma.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:26.352 [2024-06-10 11:35:55.113168] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x115b840/0x115fd30) succeed. 00:27:26.352 [2024-06-10 11:35:55.127169] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x115cde0/0x11a13c0) succeed. 00:27:26.352 11:35:55 nvmf_rdma.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:26.612 Malloc0 00:27:26.612 11:35:55 nvmf_rdma.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:26.873 11:35:55 nvmf_rdma.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:26.873 11:35:55 nvmf_rdma.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:27.134 [2024-06-10 11:35:55.881399] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:27.134 11:35:55 nvmf_rdma.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:27:27.134 [2024-06-10 11:35:56.049679] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:27:27.134 11:35:56 nvmf_rdma.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:27:27.396 [2024-06-10 11:35:56.218312] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:27:27.396 11:35:56 nvmf_rdma.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3742944 00:27:27.396 11:35:56 nvmf_rdma.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:27.396 11:35:56 nvmf_rdma.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; 
rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:27.396 11:35:56 nvmf_rdma.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3742944 /var/tmp/bdevperf.sock 00:27:27.396 11:35:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 3742944 ']' 00:27:27.396 11:35:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:27.396 11:35:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:27.396 11:35:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:27.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:27.396 11:35:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:27.396 11:35:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:28.341 11:35:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:28.341 11:35:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:27:28.341 11:35:57 nvmf_rdma.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:28.602 NVMe0n1 00:27:28.602 11:35:57 nvmf_rdma.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:28.602 00:27:28.602 11:35:57 nvmf_rdma.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3743088 00:27:28.602 11:35:57 nvmf_rdma.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:27:28.602 11:35:57 nvmf_rdma.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:29.987 11:35:58 nvmf_rdma.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:29.987 11:35:58 nvmf_rdma.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:27:33.291 11:36:01 nvmf_rdma.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:33.291 00:27:33.291 11:36:02 nvmf_rdma.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:27:33.291 11:36:02 nvmf_rdma.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:27:36.593 11:36:05 nvmf_rdma.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:36.593 [2024-06-10 11:36:05.344200] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:36.593 11:36:05 nvmf_rdma.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:27:37.534 11:36:06 nvmf_rdma.nvmf_failover -- 
host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:27:37.794 11:36:06 nvmf_rdma.nvmf_failover -- host/failover.sh@59 -- # wait 3743088 00:27:44.438 0 00:27:44.438 11:36:12 nvmf_rdma.nvmf_failover -- host/failover.sh@61 -- # killprocess 3742944 00:27:44.438 11:36:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 3742944 ']' 00:27:44.438 11:36:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 3742944 00:27:44.438 11:36:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:27:44.438 11:36:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:44.438 11:36:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3742944 00:27:44.438 11:36:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:44.438 11:36:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:44.438 11:36:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3742944' 00:27:44.438 killing process with pid 3742944 00:27:44.438 11:36:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@968 -- # kill 3742944 00:27:44.438 11:36:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@973 -- # wait 3742944 00:27:44.438 11:36:12 nvmf_rdma.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:44.438 [2024-06-10 11:35:56.292565] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:27:44.438 [2024-06-10 11:35:56.292623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3742944 ] 00:27:44.438 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.438 [2024-06-10 11:35:56.351805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.438 [2024-06-10 11:35:56.415863] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.438 Running I/O for 15 seconds... 
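Everything after "Running I/O for 15 seconds..." is try.txt, the bdevperf side of the run. The long block of nvme_qpair.c messages below is expected: each time host/failover.sh removes a listener, the corresponding RDMA queue pair is deleted, and every command still outstanding on it completes with ABORTED - SQ DELETION (00/08), which is what the failover test expects while bdevperf keeps driving I/O over the remaining path. For readability, the listener choreography traced above (host/failover.sh@22 through @59) condenses to roughly the following; every rpc.py invocation is copied from the trace, and only the $rpc/$ip/$bp shorthand and the shortened paths are introduced here.

  # Condensed restatement of the traced sequence; not a substitute for failover.sh.
  rpc=/path/to/spdk/scripts/rpc.py          # full path appears in the trace above
  ip=192.168.100.8

  # Target-side setup: transport, one Malloc bdev, one subsystem, three listeners.
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a $ip -s $port
  done

  # bdevperf (-z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f) attaches
  # to two portals, then listeners are removed/re-added underneath the running I/O.
  bp='-s /var/tmp/bdevperf.sock'
  $rpc $bp bdev_nvme_attach_controller -b NVMe0 -t rdma -a $ip -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc $bp bdev_nvme_attach_controller -b NVMe0 -t rdma -a $ip -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a $ip -s 4420; sleep 3
  $rpc $bp bdev_nvme_attach_controller -b NVMe0 -t rdma -a $ip -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a $ip -s 4421; sleep 3
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a $ip -s 4420; sleep 1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a $ip -s 4422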
00:27:44.438 [2024-06-10 11:35:59.726286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x186e00 00:27:44.438 [2024-06-10 11:35:59.726330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.438 [2024-06-10 11:35:59.726349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x186e00 00:27:44.438 [2024-06-10 11:35:59.726358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.438 [2024-06-10 11:35:59.726369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x186e00 00:27:44.438 [2024-06-10 11:35:59.726376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.438 [2024-06-10 11:35:59.726385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x186e00 00:27:44.438 [2024-06-10 11:35:59.726393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.438 [2024-06-10 11:35:59.726402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x186e00 00:27:44.438 [2024-06-10 11:35:59.726409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.438 [2024-06-10 11:35:59.726418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x186e00 00:27:44.438 [2024-06-10 11:35:59.726425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.438 [2024-06-10 11:35:59.726435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x186e00 00:27:44.438 [2024-06-10 11:35:59.726441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.438 [2024-06-10 11:35:59.726451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x186e00 00:27:44.438 [2024-06-10 11:35:59.726457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.438 [2024-06-10 11:35:59.726467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x186e00 00:27:44.438 [2024-06-10 11:35:59.726474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.438 [2024-06-10 11:35:59.726483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14232 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x186e00 00:27:44.438 [2024-06-10 11:35:59.726490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.438 [2024-06-10 11:35:59.726506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x186e00 00:27:44.438 [2024-06-10 11:35:59.726514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.438 [2024-06-10 11:35:59.726525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x186e00 00:27:44.438 [2024-06-10 11:35:59.726532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.438 [2024-06-10 11:35:59.726541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x186e00 00:27:44.438 [2024-06-10 11:35:59.726549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.438 [2024-06-10 11:35:59.726558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x186e00 00:27:44.438 [2024-06-10 11:35:59.726565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x186e00 00:27:44.439 [2024-06-10 11:35:59.726582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x186e00 00:27:44.439 [2024-06-10 11:35:59.726599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x186e00 00:27:44.439 [2024-06-10 11:35:59.726616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x186e00 00:27:44.439 [2024-06-10 11:35:59.726633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x186e00 00:27:44.439 [2024-06-10 11:35:59.726649] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x186e00 00:27:44.439 [2024-06-10 11:35:59.726665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x186e00 00:27:44.439 [2024-06-10 11:35:59.726682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x186e00 00:27:44.439 [2024-06-10 11:35:59.726701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.726718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.726734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.726751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.726772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.726788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.726803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 
[2024-06-10 11:35:59.726820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.726836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.726852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.726868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.726883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.726900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.726917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.726934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.726950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.726965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.726982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.726991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.726998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.727007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.727014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.727023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.727030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.727038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.727045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.727054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.727062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.727071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.727078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.727087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.727094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.727103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.727110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.727124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.727131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.727140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.727147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.727156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.727163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.727172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.727178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.727187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.727194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.439 [2024-06-10 11:35:59.727203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.439 [2024-06-10 11:35:59.727210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:27:44.440 [2024-06-10 11:35:59.727314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727475] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727636] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.440 [2024-06-10 11:35:59.727800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14872 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.440 [2024-06-10 11:35:59.727807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-06-10 11:35:59.727816 .. 11:35:59.728408] nvme_qpair.c: [condensed: 37 queued WRITE commands (sqid:1 nsid:1, lba 14880-15168, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each logged by 243:nvme_io_qpair_print_command and completed by 474:spdk_nvme_print_completion as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:27:44.441 [2024-06-10 11:35:59.730376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-06-10 11:35:59.730389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-06-10 11:35:59.730396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15176 len:8 PRP1 0x0 PRP2 0x0
[2024-06-10 11:35:59.730403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-06-10 11:35:59.730434] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller.
[2024-06-10 11:35:59.730443] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
[2024-06-10 11:35:59.730450] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:44.441 [2024-06-10 11:35:59.734025] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:44.442 [2024-06-10 11:35:59.753722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:27:44.442 [2024-06-10 11:35:59.802884] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:44.442 [2024-06-10 11:36:03.177547 .. 11:36:03.179697] nvme_qpair.c: [condensed: queued READ commands (sqid:1 nsid:1, lba 77608-78144, len:8, SGL KEYED DATA BLOCK, per-command ADDRESS within 0x200007502000-0x200007598000, len:0x1000 key:0x186e00) interleaved with queued WRITE commands (lba 78152-78616, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:27:44.445 [2024-06-10 11:36:03.182103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-06-10 11:36:03.182116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-06-10 11:36:03.182123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78624 len:8 PRP1 0x0 PRP2 0x0
[2024-06-10 11:36:03.182130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-06-10 11:36:03.182161] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller.
[2024-06-10 11:36:03.182170] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422
[2024-06-10 11:36:03.182178] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-06-10 11:36:03.185785] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-06-10 11:36:03.205817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
[2024-06-10 11:36:03.269296] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
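The bdev_nvme.c *NOTICE* lines above trace one full failover cycle: the RDMA qpair drops, every queued request completes with ABORTED - SQ DELETION, bdev_nvme_failover_trid walks the trid ladder 4420 -> 4421 -> 4422, and the controller reset then succeeds. As a rough sketch (not taken from this job's scripts, and RPC flag spellings can vary between SPDK releases), a ladder like this is typically registered by attaching the same subsystem NQN once per portal so bdev_nvme has alternate trids to fail over to:

    # Hypothetical setup sketch: same NQN on three portals of 192.168.100.8,
    # giving bdev_nvme the 4420 -> 4421 -> 4422 failover ladder seen above.
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4422 -n nqn.2016-06.io.spdk:cnode1 -x failover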
00:27:44.445 [2024-06-10 11:36:07.527011 ..] nvme_qpair.c: [condensed: a third abort dump follows the second reset — queued WRITE commands (sqid:1 nsid:1, lba 12968-13152, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) interleaved with queued READ commands (lba 12512-12552, len:8, SGL KEYED DATA BLOCK, len:0x1000 key:0x186e00), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the dump continues below]
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.446 [2024-06-10 11:36:07.527551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x186e00 00:27:44.446 [2024-06-10 11:36:07.527559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.446 [2024-06-10 11:36:07.527569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x186e00 00:27:44.446 [2024-06-10 11:36:07.527576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.446 [2024-06-10 11:36:07.527585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x186e00 00:27:44.446 [2024-06-10 11:36:07.527592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.446 [2024-06-10 11:36:07.527601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x186e00 00:27:44.446 [2024-06-10 11:36:07.527608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.446 [2024-06-10 11:36:07.527617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x186e00 00:27:44.446 [2024-06-10 11:36:07.527624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.446 [2024-06-10 11:36:07.527633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x186e00 00:27:44.446 [2024-06-10 11:36:07.527640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.446 [2024-06-10 11:36:07.527649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x186e00 00:27:44.446 [2024-06-10 11:36:07.527656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.446 [2024-06-10 11:36:07.527665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x186e00 00:27:44.446 [2024-06-10 11:36:07.527671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.446 [2024-06-10 11:36:07.527681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x186e00 00:27:44.446 [2024-06-10 11:36:07.527687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.446 [2024-06-10 11:36:07.527696] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x186e00 00:27:44.446 [2024-06-10 11:36:07.527703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.446 [2024-06-10 11:36:07.527713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x186e00 00:27:44.446 [2024-06-10 11:36:07.527719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.446 [2024-06-10 11:36:07.527729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x186e00 00:27:44.446 [2024-06-10 11:36:07.527736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.446 [2024-06-10 11:36:07.527746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x186e00 00:27:44.446 [2024-06-10 11:36:07.527754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.446 [2024-06-10 11:36:07.527766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x186e00 00:27:44.446 [2024-06-10 11:36:07.527774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.446 [2024-06-10 11:36:07.527783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x186e00 00:27:44.446 [2024-06-10 11:36:07.527791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.446 [2024-06-10 11:36:07.527800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x186e00 00:27:44.446 [2024-06-10 11:36:07.527807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.446 [2024-06-10 11:36:07.527816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x186e00 00:27:44.446 [2024-06-10 11:36:07.527823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.446 [2024-06-10 11:36:07.527833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x186e00 00:27:44.446 [2024-06-10 11:36:07.527840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.446 [2024-06-10 11:36:07.527850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x186e00 00:27:44.446 
[2024-06-10 11:36:07.527857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.446 [2024-06-10 11:36:07.527866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.446 [2024-06-10 11:36:07.527873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.446 [2024-06-10 11:36:07.527882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-06-10 11:36:07.527889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.527898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-06-10 11:36:07.527905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.527914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-06-10 11:36:07.527921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.527931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-06-10 11:36:07.527937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.527948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-06-10 11:36:07.527955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.527964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-06-10 11:36:07.527971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.527980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-06-10 11:36:07.527987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.527996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x186e00 00:27:44.447 [2024-06-10 11:36:07.528003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x186e00 
00:27:44.447 [2024-06-10 11:36:07.528020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x186e00 00:27:44.447 [2024-06-10 11:36:07.528037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x186e00 00:27:44.447 [2024-06-10 11:36:07.528053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x186e00 00:27:44.447 [2024-06-10 11:36:07.528070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x186e00 00:27:44.447 [2024-06-10 11:36:07.528086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x186e00 00:27:44.447 [2024-06-10 11:36:07.528102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-06-10 11:36:07.528118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-06-10 11:36:07.528134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-06-10 11:36:07.528151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-06-10 11:36:07.528167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-06-10 11:36:07.528183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-06-10 11:36:07.528199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-06-10 11:36:07.528215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x186e00 00:27:44.447 [2024-06-10 11:36:07.528234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x186e00 00:27:44.447 [2024-06-10 11:36:07.528250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x186e00 00:27:44.447 [2024-06-10 11:36:07.528268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x186e00 00:27:44.447 [2024-06-10 11:36:07.528285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x186e00 00:27:44.447 [2024-06-10 11:36:07.528301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x186e00 00:27:44.447 [2024-06-10 11:36:07.528318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x186e00 00:27:44.447 [2024-06-10 11:36:07.528334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x186e00 00:27:44.447 [2024-06-10 11:36:07.528352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-06-10 11:36:07.528368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-06-10 11:36:07.528384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-06-10 11:36:07.528400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-06-10 11:36:07.528409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-06-10 11:36:07.528416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x186e00 00:27:44.448 [2024-06-10 11:36:07.528497] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x186e00 00:27:44.448 [2024-06-10 11:36:07.528513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x186e00 00:27:44.448 [2024-06-10 11:36:07.528529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x186e00 00:27:44.448 [2024-06-10 11:36:07.528550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x186e00 00:27:44.448 [2024-06-10 11:36:07.528566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x186e00 00:27:44.448 [2024-06-10 11:36:07.528582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x186e00 00:27:44.448 [2024-06-10 11:36:07.528599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x186e00 00:27:44.448 [2024-06-10 11:36:07.528615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528656] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:120 nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x186e00 00:27:44.448 [2024-06-10 11:36:07.528892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 
nsid:1 lba:13512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.528990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.528999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.529006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.529015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-06-10 11:36:07.529023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.529032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x186e00 00:27:44.448 [2024-06-10 11:36:07.529039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.448 [2024-06-10 11:36:07.529048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x186e00 00:27:44.449 [2024-06-10 11:36:07.529055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.449 [2024-06-10 11:36:07.529064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x186e00 00:27:44.449 [2024-06-10 11:36:07.529072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.449 [2024-06-10 11:36:07.529082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x186e00 00:27:44.449 [2024-06-10 11:36:07.529089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.449 [2024-06-10 11:36:07.529100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x186e00 00:27:44.449 [2024-06-10 11:36:07.529107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.449 [2024-06-10 11:36:07.529117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x186e00 00:27:44.449 [2024-06-10 11:36:07.529124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.449 [2024-06-10 11:36:07.529133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x186e00 00:27:44.449 [2024-06-10 11:36:07.529141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.449 [2024-06-10 11:36:07.531487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:44.449 [2024-06-10 11:36:07.531499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:44.449 [2024-06-10 11:36:07.531506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12960 len:8 PRP1 0x0 PRP2 0x0 00:27:44.449 [2024-06-10 11:36:07.531514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.449 [2024-06-10 11:36:07.531545] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:27:44.449 [2024-06-10 11:36:07.531554] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:27:44.449 [2024-06-10 11:36:07.531562] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.449 [2024-06-10 11:36:07.535153] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.449 [2024-06-10 11:36:07.554674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:44.449 [2024-06-10 11:36:07.601409] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:44.449 00:27:44.449 Latency(us) 00:27:44.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.449 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:44.449 Verification LBA range: start 0x0 length 0x4000 00:27:44.449 NVMe0n1 : 15.01 13341.41 52.11 261.01 0.00 9380.11 341.33 1013623.47 00:27:44.449 =================================================================================================================== 00:27:44.449 Total : 13341.41 52.11 261.01 0.00 9380.11 341.33 1013623.47 00:27:44.449 Received shutdown signal, test time was about 15.000000 seconds 00:27:44.449 00:27:44.449 Latency(us) 00:27:44.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.449 =================================================================================================================== 00:27:44.449 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:44.449 11:36:12 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:27:44.449 11:36:12 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # count=3 00:27:44.449 11:36:12 nvmf_rdma.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:27:44.449 11:36:12 nvmf_rdma.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3746632 00:27:44.449 11:36:12 nvmf_rdma.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3746632 /var/tmp/bdevperf.sock 00:27:44.449 11:36:12 nvmf_rdma.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:27:44.449 11:36:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 3746632 ']' 00:27:44.449 11:36:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:44.449 11:36:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:44.449 11:36:12 nvmf_rdma.nvmf_failover -- 
common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:44.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:44.449 11:36:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:44.449 11:36:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:45.022 11:36:13 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:45.022 11:36:13 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:27:45.022 11:36:13 nvmf_rdma.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:27:45.022 [2024-06-10 11:36:13.894688] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:27:45.022 11:36:13 nvmf_rdma.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:27:45.283 [2024-06-10 11:36:14.067261] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:27:45.283 11:36:14 nvmf_rdma.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:45.543 NVMe0n1 00:27:45.543 11:36:14 nvmf_rdma.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:45.803 00:27:45.803 11:36:14 nvmf_rdma.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:45.803 00:27:46.063 11:36:14 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:46.063 11:36:14 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:27:46.063 11:36:14 nvmf_rdma.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:46.323 11:36:15 nvmf_rdma.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:27:49.625 11:36:18 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:49.625 11:36:18 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:27:49.625 11:36:18 nvmf_rdma.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3747652 00:27:49.625 11:36:18 nvmf_rdma.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:49.625 11:36:18 nvmf_rdma.nvmf_failover -- host/failover.sh@92 -- # wait 3747652 00:27:50.569 0 00:27:50.569 11:36:19 nvmf_rdma.nvmf_failover -- 
host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:50.569 [2024-06-10 11:36:12.979891] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:27:50.569 [2024-06-10 11:36:12.979947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3746632 ] 00:27:50.569 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.569 [2024-06-10 11:36:13.038945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.569 [2024-06-10 11:36:13.102198] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.569 [2024-06-10 11:36:15.099977] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:27:50.569 [2024-06-10 11:36:15.100579] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.569 [2024-06-10 11:36:15.100609] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.569 [2024-06-10 11:36:15.129689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:50.569 [2024-06-10 11:36:15.155817] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:50.569 Running I/O for 1 seconds... 00:27:50.569 00:27:50.569 Latency(us) 00:27:50.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.569 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:50.569 Verification LBA range: start 0x0 length 0x4000 00:27:50.569 NVMe0n1 : 1.01 16649.27 65.04 0.00 0.00 7641.65 2048.00 13981.01 00:27:50.569 =================================================================================================================== 00:27:50.569 Total : 16649.27 65.04 0.00 0.00 7641.65 2048.00 13981.01 00:27:50.569 11:36:19 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:50.569 11:36:19 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:27:50.833 11:36:19 nvmf_rdma.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:50.833 11:36:19 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:50.833 11:36:19 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:27:51.095 11:36:19 nvmf_rdma.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:51.357 11:36:20 nvmf_rdma.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:27:54.660 11:36:23 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:54.660 11:36:23 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:27:54.660 11:36:23 
nvmf_rdma.nvmf_failover -- host/failover.sh@108 -- # killprocess 3746632 00:27:54.660 11:36:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 3746632 ']' 00:27:54.660 11:36:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 3746632 00:27:54.660 11:36:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:27:54.660 11:36:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:54.660 11:36:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3746632 00:27:54.660 11:36:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:54.660 11:36:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:54.660 11:36:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3746632' 00:27:54.660 killing process with pid 3746632 00:27:54.660 11:36:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@968 -- # kill 3746632 00:27:54.660 11:36:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@973 -- # wait 3746632 00:27:54.660 11:36:23 nvmf_rdma.nvmf_failover -- host/failover.sh@110 -- # sync 00:27:54.660 11:36:23 nvmf_rdma.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:54.920 11:36:23 nvmf_rdma.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:54.920 11:36:23 nvmf_rdma.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:54.920 11:36:23 nvmf_rdma.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:27:54.920 11:36:23 nvmf_rdma.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:54.920 11:36:23 nvmf_rdma.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:27:54.920 11:36:23 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:54.920 11:36:23 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:54.920 11:36:23 nvmf_rdma.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:27:54.920 11:36:23 nvmf_rdma.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:54.920 11:36:23 nvmf_rdma.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:54.920 rmmod nvme_rdma 00:27:54.920 rmmod nvme_fabrics 00:27:54.920 11:36:23 nvmf_rdma.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:54.920 11:36:23 nvmf_rdma.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:27:54.920 11:36:23 nvmf_rdma.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:27:54.920 11:36:23 nvmf_rdma.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3742368 ']' 00:27:54.920 11:36:23 nvmf_rdma.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3742368 00:27:54.920 11:36:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 3742368 ']' 00:27:54.920 11:36:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 3742368 00:27:54.920 11:36:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:27:54.920 11:36:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:54.920 11:36:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3742368 00:27:54.920 11:36:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:27:54.920 11:36:23 
nvmf_rdma.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:27:54.920 11:36:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3742368' 00:27:54.920 killing process with pid 3742368 00:27:54.920 11:36:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@968 -- # kill 3742368 00:27:54.920 11:36:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@973 -- # wait 3742368 00:27:55.181 11:36:23 nvmf_rdma.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:55.181 11:36:23 nvmf_rdma.nvmf_failover -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:55.181 00:27:55.181 real 0m36.966s 00:27:55.181 user 2m2.030s 00:27:55.181 sys 0m6.900s 00:27:55.181 11:36:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:55.181 11:36:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:55.181 ************************************ 00:27:55.181 END TEST nvmf_failover 00:27:55.181 ************************************ 00:27:55.181 11:36:23 nvmf_rdma -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:27:55.181 11:36:23 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:55.181 11:36:23 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:55.181 11:36:23 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:55.181 ************************************ 00:27:55.181 START TEST nvmf_host_discovery 00:27:55.181 ************************************ 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:27:55.181 * Looking for test storage... 
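The failover run finishes with the teardown traced above: bdevperf and then the nvmf target are stopped with the killprocess helper, and the kernel initiator modules are unloaded. A condensed sketch of that pattern, reconstructed from the trace; the real helper in autotest_common.sh has additional retry and sudo handling that is omitted here:

    # Sketch only: condensed from the autotest_common.sh trace above.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 0                    # nothing to do if already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # (the real helper special-cases process_name = sudo; omitted in this sketch)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }

    # nvmftestfini then unloads the kernel initiator modules and stops the target:
    modprobe -v -r nvme-rdma
    modprobe -v -r nvme-fabrics
    killprocess "$nvmfpid"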
00:27:55.181 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.181 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.182 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:55.182 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:55.182 11:36:24 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:55.182 11:36:24 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:27:55.182 11:36:24 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:27:55.182 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
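For readers following the trace: the "Skipping tests on RDMA..." message above is produced by a transport guard at the top of test/nvmf/host/discovery.sh (traced as discovery.sh@11-13), which bails out before any discovery work is attempted. A minimal sketch of that guard, reconstructed only from the traced commands — the TEST_TRANSPORT variable name and the surrounding script context are assumptions, not taken from the script source:

# sketch of the guard traced at host/discovery.sh@11-13 (reconstruction, not the verbatim script)
if [ "$TEST_TRANSPORT" == "rdma" ]; then
        echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
        exit 0
fi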
00:27:55.182 11:36:24 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:27:55.182 00:27:55.182 real 0m0.125s 00:27:55.182 user 0m0.061s 00:27:55.182 sys 0m0.071s 00:27:55.182 11:36:24 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:55.444 ************************************ 00:27:55.444 END TEST nvmf_host_discovery 00:27:55.444 ************************************ 00:27:55.444 11:36:24 nvmf_rdma -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:27:55.444 11:36:24 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:55.444 11:36:24 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:55.444 11:36:24 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:55.444 ************************************ 00:27:55.444 START TEST nvmf_host_multipath_status 00:27:55.444 ************************************ 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:27:55.444 * Looking for test storage... 00:27:55.444 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:55.444 11:36:24 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:27:55.444 11:36:24 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:03.590 
11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:28:03.590 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # 
[[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:28:03.590 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:28:03.590 Found net devices under 0000:98:00.0: mlx_0_0 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:28:03.590 Found net devices under 0000:98:00.1: mlx_0_1 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:28:03.590 11:36:31 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # rdma_device_init 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # uname 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # modprobe ib_cm 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@63 -- # modprobe ib_core 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@64 -- # modprobe ib_umad 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe iw_cm 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # allocate_nic_ips 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # get_rdma_if_list 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:03.590 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:03.591 11:36:31 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:28:03.591 26: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:03.591 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:28:03.591 altname enp152s0f0np0 00:28:03.591 altname ens817f0np0 00:28:03.591 inet 192.168.100.8/24 scope global mlx_0_0 00:28:03.591 valid_lft forever preferred_lft forever 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:28:03.591 27: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:03.591 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:28:03.591 altname enp152s0f1np1 00:28:03.591 altname ens817f1np1 00:28:03.591 inet 192.168.100.9/24 scope global mlx_0_1 00:28:03.591 valid_lft forever preferred_lft forever 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # get_rdma_if_list 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:03.591 11:36:31 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:28:03.591 192.168.100.9' 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:28:03.591 192.168.100.9' 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # head -n 1 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- 
nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:28:03.591 192.168.100.9' 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # tail -n +2 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # head -n 1 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3752638 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3752638 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 3752638 ']' 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:03.591 11:36:31 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:03.591 [2024-06-10 11:36:31.366222] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:28:03.591 [2024-06-10 11:36:31.366296] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:03.591 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.591 [2024-06-10 11:36:31.431623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:03.591 [2024-06-10 11:36:31.505819] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:03.591 [2024-06-10 11:36:31.505857] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:03.591 [2024-06-10 11:36:31.505866] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:03.591 [2024-06-10 11:36:31.505877] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:03.591 [2024-06-10 11:36:31.505884] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:03.591 [2024-06-10 11:36:31.506045] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.591 [2024-06-10 11:36:31.506166] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.592 11:36:32 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:03.592 11:36:32 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:28:03.592 11:36:32 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:03.592 11:36:32 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:03.592 11:36:32 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:03.592 11:36:32 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:03.592 11:36:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3752638 00:28:03.592 11:36:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:03.592 [2024-06-10 11:36:32.339489] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ab8a20/0x1abcf10) succeed. 00:28:03.592 [2024-06-10 11:36:32.352661] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ab9f20/0x1afe5a0) succeed. 
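The block above is the target-side bring-up for the multipath_status test: nvmfappstart launches nvmf_tgt with core mask 0x3, waitforlisten blocks on /var/tmp/spdk.sock, and the first RPC creates the RDMA transport (the two create_ib_device notices confirm both mlx5 ports were claimed). Condensed into plain commands as a sketch using only the arguments visible in the trace, with the long workspace prefix shortened to a relative spdk/ path:

# target launch and RDMA transport creation, as traced above
spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192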
00:28:03.592 11:36:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:03.853 Malloc0 00:28:03.853 11:36:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:28:03.853 11:36:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:04.114 11:36:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:04.114 [2024-06-10 11:36:33.003383] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:04.115 11:36:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:28:04.375 [2024-06-10 11:36:33.143491] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:28:04.375 11:36:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3753033 00:28:04.376 11:36:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:04.376 11:36:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:28:04.376 11:36:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3753033 /var/tmp/bdevperf.sock 00:28:04.376 11:36:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 3753033 ']' 00:28:04.376 11:36:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:04.376 11:36:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:04.376 11:36:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:04.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
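Next the test builds the subsystem whose ANA states it will flip: a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with ANA reporting enabled (-r), the namespace, and two RDMA listeners on ports 4420 and 4421, followed by bdevperf started on its own RPC socket. A condensed sketch of those RPCs, taken from the commands traced above (paths shortened to a relative spdk/ prefix):

# subsystem with two listeners, then the bdevperf host process
spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &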
00:28:04.376 11:36:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:04.376 11:36:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:05.319 11:36:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:05.319 11:36:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:28:05.319 11:36:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:05.319 11:36:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:28:05.580 Nvme0n1 00:28:05.580 11:36:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:05.841 Nvme0n1 00:28:05.841 11:36:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:28:05.841 11:36:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:28:07.782 11:36:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:28:07.783 11:36:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:28:08.043 11:36:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:28:08.043 11:36:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:28:09.429 11:36:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:28:09.429 11:36:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:09.429 11:36:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.429 11:36:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:09.429 11:36:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:09.429 11:36:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:09.429 11:36:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.429 11:36:38 nvmf_rdma.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:09.429 11:36:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:09.429 11:36:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:09.429 11:36:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.429 11:36:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:09.691 11:36:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:09.691 11:36:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:09.691 11:36:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.691 11:36:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:09.951 11:36:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:09.951 11:36:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:09.952 11:36:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.952 11:36:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:09.952 11:36:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:09.952 11:36:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:09.952 11:36:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.952 11:36:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:10.211 11:36:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:10.211 11:36:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:28:10.211 11:36:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:28:10.211 11:36:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:28:10.472 11:36:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:28:11.413 11:36:40 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:28:11.413 11:36:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:11.413 11:36:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:11.413 11:36:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:11.674 11:36:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:11.674 11:36:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:11.674 11:36:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:11.674 11:36:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:11.935 11:36:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:11.935 11:36:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:11.935 11:36:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:11.935 11:36:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:11.935 11:36:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:11.935 11:36:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:11.935 11:36:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:11.935 11:36:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:12.196 11:36:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:12.196 11:36:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:12.196 11:36:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:12.196 11:36:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:12.457 11:36:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:12.457 11:36:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:12.457 11:36:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:12.457 11:36:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:12.457 11:36:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:12.457 11:36:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:28:12.457 11:36:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:28:12.719 11:36:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:28:12.981 11:36:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:28:13.920 11:36:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:28:13.920 11:36:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:13.920 11:36:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:13.920 11:36:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:13.920 11:36:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:13.920 11:36:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:13.920 11:36:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:13.920 11:36:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:14.180 11:36:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:14.180 11:36:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:14.180 11:36:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:14.180 11:36:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:14.440 11:36:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:14.440 11:36:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:14.440 11:36:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:14.440 11:36:43 nvmf_rdma.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:14.440 11:36:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:14.440 11:36:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:14.440 11:36:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:14.440 11:36:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:14.701 11:36:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:14.701 11:36:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:14.701 11:36:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:14.701 11:36:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:14.962 11:36:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:14.962 11:36:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:28:14.962 11:36:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:28:14.962 11:36:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:28:15.223 11:36:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:28:16.165 11:36:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:28:16.165 11:36:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:16.165 11:36:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.165 11:36:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:16.426 11:36:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:16.426 11:36:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:16.426 11:36:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.426 11:36:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 
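The @64 entries repeated throughout this trace all come from the port_status helper in test/nvmf/host/multipath_status.sh: it reads the initiator's I/O paths over the bdevperf RPC socket and compares one field of the path whose listener port matches. Below is a minimal sketch reconstructed from the trace above (the variable names rpc_py and bdevperf_rpc_sock are assumptions; the real script may be organized differently):

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock

port_status() {
    # $1 = listener port (4420 or 4421), $2 = field (current|connected|accessible), $3 = expected value
    [[ $($rpc_py -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2") == "$3" ]]
}

check_status() {
    # six expected values, in the order seen at @68-@73: current, connected, accessible for ports 4420/4421
    port_status 4420 current "$1" && port_status 4421 current "$2" \
        && port_status 4420 connected "$3" && port_status 4421 connected "$4" \
        && port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
}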
00:28:16.426 11:36:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:16.426 11:36:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:16.426 11:36:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.426 11:36:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:16.686 11:36:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:16.686 11:36:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:16.686 11:36:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.686 11:36:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:16.947 11:36:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:16.947 11:36:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:16.947 11:36:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:16.947 11:36:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.947 11:36:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:16.947 11:36:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:16.947 11:36:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.947 11:36:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:17.209 11:36:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:17.209 11:36:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:28:17.209 11:36:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:28:17.470 11:36:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:28:17.470 11:36:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:28:18.855 11:36:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false 
false 00:28:18.855 11:36:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:18.855 11:36:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.855 11:36:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:18.855 11:36:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:18.855 11:36:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:18.855 11:36:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.855 11:36:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:18.855 11:36:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:18.855 11:36:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:18.855 11:36:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.855 11:36:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:19.116 11:36:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:19.116 11:36:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:19.116 11:36:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:19.116 11:36:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:19.378 11:36:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:19.378 11:36:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:19.378 11:36:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:19.378 11:36:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:19.378 11:36:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:19.378 11:36:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:19.378 11:36:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:19.378 11:36:48 nvmf_rdma.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:19.639 11:36:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:19.639 11:36:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:28:19.639 11:36:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:28:19.639 11:36:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:28:19.901 11:36:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:28:20.844 11:36:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:28:20.844 11:36:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:20.844 11:36:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:20.844 11:36:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:21.105 11:36:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:21.105 11:36:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:21.105 11:36:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.105 11:36:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:21.367 11:36:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.367 11:36:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:21.367 11:36:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.367 11:36:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:21.367 11:36:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.367 11:36:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:21.367 11:36:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.367 11:36:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:21.628 
11:36:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.628 11:36:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:21.628 11:36:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.628 11:36:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:21.890 11:36:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:21.890 11:36:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:21.890 11:36:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.890 11:36:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:21.890 11:36:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.890 11:36:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:28:22.165 11:36:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:28:22.165 11:36:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:28:22.165 11:36:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:28:22.470 11:36:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:28:23.412 11:36:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:28:23.412 11:36:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:23.412 11:36:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:23.412 11:36:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:23.673 11:36:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:23.673 11:36:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:23.673 11:36:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:23.673 11:36:52 nvmf_rdma.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:23.673 11:36:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:23.673 11:36:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:23.673 11:36:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:23.673 11:36:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:23.933 11:36:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:23.933 11:36:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:23.933 11:36:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:23.933 11:36:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:24.195 11:36:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:24.195 11:36:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:24.195 11:36:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:24.195 11:36:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:24.195 11:36:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:24.195 11:36:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:24.195 11:36:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:24.195 11:36:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:24.455 11:36:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:24.455 11:36:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:28:24.455 11:36:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:28:24.717 11:36:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:28:24.717 11:36:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:28:25.659 11:36:54 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:28:25.659 11:36:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:25.659 11:36:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:25.659 11:36:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:25.920 11:36:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:25.920 11:36:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:25.920 11:36:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:25.920 11:36:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:26.180 11:36:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:26.180 11:36:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:26.180 11:36:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:26.180 11:36:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:26.180 11:36:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:26.180 11:36:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:26.180 11:36:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:26.180 11:36:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:26.441 11:36:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:26.441 11:36:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:26.441 11:36:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:26.441 11:36:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:26.702 11:36:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:26.702 11:36:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:26.702 11:36:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:26.702 11:36:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:26.702 11:36:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:26.702 11:36:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:28:26.702 11:36:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:28:26.963 11:36:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:28:26.963 11:36:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:28:28.349 11:36:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:28:28.349 11:36:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:28.349 11:36:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:28.349 11:36:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:28.349 11:36:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:28.349 11:36:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:28.349 11:36:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:28.349 11:36:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:28.349 11:36:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:28.349 11:36:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:28.349 11:36:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:28.349 11:36:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:28.610 11:36:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:28.610 11:36:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:28.610 11:36:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:28.610 11:36:57 nvmf_rdma.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:28.871 11:36:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:28.871 11:36:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:28.871 11:36:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:28.871 11:36:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:28.871 11:36:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:28.871 11:36:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:28.871 11:36:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:28.871 11:36:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:29.131 11:36:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:29.131 11:36:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:28:29.131 11:36:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:28:29.131 11:36:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:28:29.392 11:36:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:28:30.347 11:36:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:28:30.347 11:36:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:30.347 11:36:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:30.347 11:36:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:30.608 11:36:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:30.608 11:36:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:30.608 11:36:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:30.608 11:36:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 
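The ANA transitions between the check_status calls are driven by the set_ANA_state helper (@59/@60 above), which re-programs the ANA state of the two RDMA listeners on the target side; after the @116 call the initiator switches to an active/active multipath policy and the same combinations are walked again. A minimal sketch reconstructed from the trace (variable names are assumptions):

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
target_ip=192.168.100.8

set_ANA_state() {
    # $1 / $2 = ANA state for the 4420 / 4421 listener: optimized, non_optimized or inaccessible
    $rpc_py nvmf_subsystem_listener_set_ana_state $nqn -t rdma -a $target_ip -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state $nqn -t rdma -a $target_ip -s 4421 -n "$2"
}

# Multipath policy change seen at @116, issued against the bdevperf RPC socket:
# $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active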
00:28:30.869 11:36:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:30.869 11:36:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:30.869 11:36:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:30.869 11:36:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:30.869 11:36:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:30.869 11:36:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:30.869 11:36:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:30.869 11:36:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:31.130 11:36:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:31.130 11:36:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:31.130 11:36:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:31.130 11:36:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:31.391 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:31.391 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:31.391 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:31.391 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:31.391 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:31.391 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3753033 00:28:31.391 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 3753033 ']' 00:28:31.391 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 3753033 00:28:31.391 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:28:31.391 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:31.391 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3753033 00:28:31.391 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:28:31.391 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' 
reactor_2 = sudo ']' 00:28:31.391 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3753033' 00:28:31.391 killing process with pid 3753033 00:28:31.391 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 3753033 00:28:31.391 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 3753033 00:28:31.658 Connection closed with partial response: 00:28:31.658 00:28:31.658 00:28:31.658 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3753033 00:28:31.658 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:31.658 [2024-06-10 11:36:33.205048] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:28:31.658 [2024-06-10 11:36:33.205106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3753033 ] 00:28:31.658 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.658 [2024-06-10 11:36:33.255266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.658 [2024-06-10 11:36:33.307399] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:28:31.658 Running I/O for 90 seconds... 00:28:31.658 [2024-06-10 11:36:46.233793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.658 [2024-06-10 11:36:46.233826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:31.658 [2024-06-10 11:36:46.233855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.658 [2024-06-10 11:36:46.233861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:31.658 [2024-06-10 11:36:46.234127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:43120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x186e00 00:28:31.658 [2024-06-10 11:36:46.234134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:31.658 [2024-06-10 11:36:46.234143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x186e00 00:28:31.658 [2024-06-10 11:36:46.234148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.658 [2024-06-10 11:36:46.234156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:43136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x186e00 00:28:31.658 [2024-06-10 11:36:46.234162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.658 [2024-06-10 11:36:46.234169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43144 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x200007590000 len:0x1000 key:0x186e00 00:28:31.658 [2024-06-10 11:36:46.234175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:31.658 [2024-06-10 11:36:46.234183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:43152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x186e00 00:28:31.658 [2024-06-10 11:36:46.234188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:31.658 [2024-06-10 11:36:46.234195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:43160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x186e00 00:28:31.658 [2024-06-10 11:36:46.234200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:31.658 [2024-06-10 11:36:46.234208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:43168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x186e00 00:28:31.658 [2024-06-10 11:36:46.234213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:43176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:43192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:43200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:43208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:43216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 
key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:43224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:43232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:43240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:43248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:43256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.659 [2024-06-10 11:36:46.234368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:43264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:43272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:43280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:43288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:43296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:43304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:43320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:43328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:43336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:43344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:43352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234515] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:43360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:43368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:43376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:43400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:43416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:43424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:49 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:31.659 [2024-06-10 11:36:46.234947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:43432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x186e00 00:28:31.659 [2024-06-10 11:36:46.234952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.234961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:43440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.234968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.234977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:43448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.234982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.234992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:43456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.234997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:43464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:43496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002f 
p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:43504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:43520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:43536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:43544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:43552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:43560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:43568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 
11:36:46.235205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:43576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:43584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:43592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:43600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:43608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:43616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:43624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:43632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235594] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:43648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:43672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:43688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:31.660 [2024-06-10 11:36:46.235709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:43704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x186e00 00:28:31.660 [2024-06-10 11:36:46.235714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.235725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:43712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.235732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.235743] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:90 nsid:1 lba:43720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.235748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.235759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:43728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.235768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.235779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:43736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.235784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.235795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:43744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.235800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.235811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:43752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.235817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.235828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:43760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.235833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.235844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:43768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.235850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.235862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.235867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.235878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:43784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.235883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.235894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:43792 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.235899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.235910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:43800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.235916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.235928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:43808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.235932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.235944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:43816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.235948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.235960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:43824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.235965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.235976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.235981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.235992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:43840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.235997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.236008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:43848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.236013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.236024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:43856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.236029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.236238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:43864 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000754e000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.236243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.236257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:43872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.236262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.236275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:43880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.236280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.236293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:43888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.236298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.236312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:43896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.236320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.236335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:43904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.236340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.236353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:43912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.236358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.236371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:43920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.236376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.236389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.236394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.236407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:43936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x186e00 
00:28:31.661 [2024-06-10 11:36:46.236412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.236425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:43944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.236430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.236443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:43952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.236448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.236461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:43960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.236466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.236479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:43968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.236484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.236497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:43976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x186e00 00:28:31.661 [2024-06-10 11:36:46.236502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:31.661 [2024-06-10 11:36:46.236518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:46.236523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:46.236536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:46.236541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:46.236554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:44000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:46.236559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:46.236571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:46.236577] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:46.236589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:46.236594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:46.236608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:46.236612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:46.236625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:46.236630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:46.236643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:46.236648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:46.236661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:46.236666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:46.236679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:46.236684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:46.236696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:46.236702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:46.236715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:46.236721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:46.236733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:46.236738] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:46.236751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:46.236756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:46.236773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:46.245131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:46.245170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:46.245176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:46.245190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:46.245195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:58.226148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.662 [2024-06-10 11:36:58.226182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:58.226197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:65904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:58.226203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:58.226211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:58.226216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:58.226224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:58.226229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:58.226236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.662 [2024-06-10 11:36:58.226241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:58.226248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.662 [2024-06-10 11:36:58.226253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:58.226265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:58.226270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:58.226278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:66016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:58.226283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:58.226290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.662 [2024-06-10 11:36:58.226295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:58.226302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:58.226307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:58.226314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.662 [2024-06-10 11:36:58.226319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:58.226326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:66080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:58.226331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:58.226339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.662 [2024-06-10 11:36:58.226344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:58.226351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:66096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:58.226356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:58.226612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 
lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.662 [2024-06-10 11:36:58.226619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:58.226626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:66120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:58.226632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:58.226639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.662 [2024-06-10 11:36:58.226644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:31.662 [2024-06-10 11:36:58.226652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x186e00 00:28:31.662 [2024-06-10 11:36:58.226658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.663 [2024-06-10 11:36:58.226670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x186e00 00:28:31.663 [2024-06-10 11:36:58.226682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:66200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x186e00 00:28:31.663 [2024-06-10 11:36:58.226695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.663 [2024-06-10 11:36:58.226707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x186e00 00:28:31.663 [2024-06-10 11:36:58.226720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x186e00 00:28:31.663 [2024-06-10 11:36:58.226732] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:66280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x186e00 00:28:31.663 [2024-06-10 11:36:58.226744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:66296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x186e00 00:28:31.663 [2024-06-10 11:36:58.226756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x186e00 00:28:31.663 [2024-06-10 11:36:58.226772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.663 [2024-06-10 11:36:58.226784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.663 [2024-06-10 11:36:58.226796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:66368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x186e00 00:28:31.663 [2024-06-10 11:36:58.226809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.663 [2024-06-10 11:36:58.226821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:65848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x186e00 00:28:31.663 [2024-06-10 11:36:58.226833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:65872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x186e00 00:28:31.663 [2024-06-10 11:36:58.226845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 
00:28:31.663 [2024-06-10 11:36:58.226853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:65896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x186e00 00:28:31.663 [2024-06-10 11:36:58.226858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.663 [2024-06-10 11:36:58.226870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.663 [2024-06-10 11:36:58.226882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x186e00 00:28:31.663 [2024-06-10 11:36:58.226894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.663 [2024-06-10 11:36:58.226906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x186e00 00:28:31.663 [2024-06-10 11:36:58.226918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:66008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x186e00 00:28:31.663 [2024-06-10 11:36:58.226930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:66024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x186e00 00:28:31.663 [2024-06-10 11:36:58.226944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:66040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x186e00 00:28:31.663 [2024-06-10 11:36:58.226956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 
lba:66064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x186e00 00:28:31.663 [2024-06-10 11:36:58.226968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.663 [2024-06-10 11:36:58.226980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x186e00 00:28:31.663 [2024-06-10 11:36:58.226992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.226999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.663 [2024-06-10 11:36:58.227004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.227070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.663 [2024-06-10 11:36:58.227077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.227084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.663 [2024-06-10 11:36:58.227089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:31.663 [2024-06-10 11:36:58.227098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:66128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x186e00 00:28:31.664 [2024-06-10 11:36:58.227103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:31.664 [2024-06-10 11:36:58.227110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.664 [2024-06-10 11:36:58.227115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:31.664 [2024-06-10 11:36:58.227122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.664 [2024-06-10 11:36:58.227127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:31.664 [2024-06-10 11:36:58.227134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x186e00 00:28:31.664 [2024-06-10 11:36:58.227138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:65 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:31.664 [2024-06-10 11:36:58.227146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:66184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x186e00 00:28:31.664 [2024-06-10 11:36:58.227152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:31.664 [2024-06-10 11:36:58.227160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.664 [2024-06-10 11:36:58.227164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:31.664 [2024-06-10 11:36:58.227172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:66216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x186e00 00:28:31.664 [2024-06-10 11:36:58.227176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:31.664 [2024-06-10 11:36:58.227184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:66240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x186e00 00:28:31.664 [2024-06-10 11:36:58.227188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:31.664 [2024-06-10 11:36:58.227196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:66264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x186e00 00:28:31.664 [2024-06-10 11:36:58.227201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:31.664 [2024-06-10 11:36:58.227208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.664 [2024-06-10 11:36:58.227213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:31.664 [2024-06-10 11:36:58.227220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x186e00 00:28:31.664 [2024-06-10 11:36:58.227225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:31.664 [2024-06-10 11:36:58.227232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x186e00 00:28:31.664 [2024-06-10 11:36:58.227237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:31.664 [2024-06-10 11:36:58.227244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.664 [2024-06-10 11:36:58.227249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:31.664 [2024-06-10 11:36:58.227256] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.664 [2024-06-10 11:36:58.227261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:31.664 [2024-06-10 11:36:58.227268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.664 [2024-06-10 11:36:58.227273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:31.664 [2024-06-10 11:36:58.227281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x186e00 00:28:31.664 [2024-06-10 11:36:58.227286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:31.664 [2024-06-10 11:36:58.227295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x186e00 00:28:31.664 [2024-06-10 11:36:58.227300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:31.664 [2024-06-10 11:36:58.227308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:65936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x186e00 00:28:31.664 [2024-06-10 11:36:58.227313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:31.664 [2024-06-10 11:36:58.227320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.664 [2024-06-10 11:36:58.227325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:31.664 [2024-06-10 11:36:58.227332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:66016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x186e00 00:28:31.664 [2024-06-10 11:36:58.227337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:31.664 [2024-06-10 11:36:58.227344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x186e00 00:28:31.664 [2024-06-10 11:36:58.227349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.664 [2024-06-10 11:36:58.227356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:66080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x186e00 00:28:31.664 [2024-06-10 11:36:58.227361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:31.664 [2024-06-10 11:36:58.227369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:66096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x186e00 00:28:31.664 
[2024-06-10 11:36:58.227374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:28:31.664 [2024-06-10 11:36:58.227624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:66400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x186e00
00:28:31.664 [2024-06-10 11:36:58.227630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:31.664 [2024-06-10 11:36:58.227690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:66120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x186e00
00:28:31.664 [2024-06-10 11:36:58.227697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:28:31.664 [... remaining READ/WRITE command/completion pairs omitted for brevity: every outstanding I/O on qid:1 between 11:36:58.227 and 11:36:58.242 completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing from 0046 through 007f, wrapping to 0000, and ending at 0058 ...]
00:28:31.669 [2024-06-10 11:36:58.242389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:67008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x186e00
00:28:31.669 [2024-06-10 11:36:58.242394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:28:31.669 Received shutdown signal, test time was about 25.587575 seconds
00:28:31.669
00:28:31.669                                                                          Latency(us)
00:28:31.669 Device Information                                                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:31.669 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:31.669 	 Verification LBA range: start 0x0 length 0x4000
00:28:31.669 	 Nvme0n1                                                               :      25.59   15625.46      61.04       0.00       0.00    8172.95      68.27 3019898.88
00:28:31.669 ===================================================================================================================
00:28:31.669 Total                                                                   :            15625.46      61.04       0.00       0.00    8172.95      68.27 3019898.88
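Each failed I/O above is logged as a pair of NOTICE entries: nvme_io_qpair_print_command prints the submitted command (opcode, cid, LBA, SGL descriptor) and spdk_nvme_print_completion prints its completion status. ASYMMETRIC ACCESS INACCESSIBLE (03/02) is NVMe status code type 0x3 (path-related) with status code 0x02, i.e. the expected error while the ANA state of the path under test is inaccessible. A minimal triage sketch over a captured copy of this console output, in the same shell the test scripts use (the console.log path is illustrative, not something the suite produces):

  log=console.log   # assumed local capture of the output above
  # total completions that failed as path-related/inaccessible (SCT 0x3, SC 0x02)
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' "$log"
  # breakdown of the failed commands by opcode (READ vs WRITE)
  grep -o '\*NOTICE\*: \(READ\|WRITE\) sqid:[0-9]*' "$log" | awk '{print $2}' | sort | uniq -c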
00:28:31.669 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:28:31.931 rmmod nvme_rdma
00:28:31.931 rmmod nvme_fabrics
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3752638 ']'
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3752638
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 3752638 ']'
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 3752638
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3752638
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3752638'
00:28:31.931 killing process with pid 3752638
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 3752638
00:28:31.931 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 3752638
00:28:32.192 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:28:32.192 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:28:32.192
00:28:32.192 real 0m36.747s
00:28:32.192 user 1m42.708s
00:28:32.192 sys 0m8.312s
00:28:32.192 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # xtrace_disable
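The nvmfcleanup sequence traced above (nvmf/common.sh@120-@125) unloads the RDMA transport modules inside a retry loop, since module removal can fail while queue pairs are still draining. A rough, simplified reconstruction of that loop, inferred only from the trace (the real nvmf/common.sh differs in detail):

  set +e                                    # tolerate transient unload failures
  for i in {1..20}; do
      modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
      sleep 1                               # assumed back-off; not visible in the trace
  done
  set -e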
00:28:32.192 11:37:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:28:32.192 ************************************
00:28:32.192 END TEST nvmf_host_multipath_status
00:28:32.192 ************************************
00:28:32.192 11:37:01 nvmf_rdma -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:28:32.192 11:37:01 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:28:32.192 11:37:01 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable
00:28:32.192 11:37:01 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:28:32.192 ************************************
00:28:32.192 START TEST nvmf_discovery_remove_ifc
00:28:32.192 ************************************
00:28:32.192 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:28:32.192 * Looking for test storage...
00:28:32.193 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:28:32.193 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:28:32.193 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:28:32.193 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:32.193 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:32.193 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:32.193 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:32.193 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:32.193 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:32.193 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:32.193 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:32.193 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:32.193 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
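nvme gen-hostnqn (nvmf/common.sh@17 above) emits a UUID-based host NQN in the nqn.2014-08.org.nvmexpress:uuid: format, which the script then splits into NVME_HOSTNQN and NVME_HOSTID. A rough equivalent for machines without nvme-cli, shown purely as an illustration (uuidgen output is random, so the value will differ from the 008c5ac1-... one captured above):

  # illustrative stand-in for `nvme gen-hostnqn`
  printf 'nqn.2014-08.org.nvmexpress:uuid:%s\n' "$(uuidgen)"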
00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
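build_nvmf_app_args (nvmf/common.sh@49 above) assembles the argument array later used to launch the target application. A sketch of the branch taken in this run, inferred only from the @25-@33 trace lines (the guards appear pre-expanded in the trace, so the original conditions are not recoverable, and the full function handles more cases):

  # inferred sketch; not the verbatim nvmf/common.sh source
  build_nvmf_app_args() {
      if [ 0 -eq 1 ]; then                         # @25: guard evaluated false in this run
          :
      fi
      NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # @29: shared-memory id and log mask
      NVMF_APP+=("${NO_HUGE[@]}")                  # @31: optional no-hugepages arguments
      if [ -n '' ]; then                           # @33: extra option string, empty here
          :
      fi
  }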
nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:28:32.455 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:28:32.455 00:28:32.455 real 0m0.128s 00:28:32.455 user 0m0.053s 00:28:32.455 sys 0m0.083s 00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:32.455 11:37:01 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:32.455 ************************************ 00:28:32.455 END TEST nvmf_discovery_remove_ifc 00:28:32.455 ************************************ 00:28:32.455 11:37:01 nvmf_rdma -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:28:32.455 11:37:01 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:32.455 11:37:01 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:32.455 11:37:01 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:32.455 ************************************ 00:28:32.455 START TEST nvmf_identify_kernel_target 00:28:32.455 ************************************ 00:28:32.455 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:28:32.455 * Looking for test storage... 
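The START/END banners and the real/user/sys timings in this log come from the harness's run_test wrapper (the common/autotest_common.sh@1100/@1106/@1124 frames in the trace). A minimal sketch of that pattern, assuming a simplified wrapper without the argument validation and xtrace plumbing of the real helper:

```bash
# Minimal sketch of a run_test-style wrapper (hypothetical simplification;
# the real helper lives in common/autotest_common.sh).
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"   # produces the real/user/sys lines seen in the log
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# Usage mirroring the trace above:
# run_test nvmf_identify_kernel_target ./test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma
```

The wrapper treats any zero exit as a pass, which is why discovery_remove_ifc above can bail out with exit 0 on RDMA (host/discovery_remove_ifc.sh@14-16) and still be reported between clean START/END banners.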
00:28:32.455 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:32.455 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:32.455 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:32.455 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:32.455 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:32.455 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:32.455 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:32.455 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:28:32.456 11:37:01 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:28:40.600 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:28:40.600 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:28:40.600 Found net devices under 0000:98:00.0: mlx_0_0 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:28:40.600 Found net devices under 0000:98:00.1: mlx_0_1 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # rdma_device_init 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # uname 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t 
rxe_net_devs 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:40.600 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:28:40.601 26: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:40.601 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:28:40.601 altname enp152s0f0np0 00:28:40.601 altname ens817f0np0 00:28:40.601 inet 192.168.100.8/24 scope global mlx_0_0 00:28:40.601 valid_lft forever preferred_lft forever 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target 
-- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:28:40.601 27: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:40.601 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:28:40.601 altname enp152s0f1np1 00:28:40.601 altname ens817f1np1 00:28:40.601 inet 192.168.100.9/24 scope global mlx_0_1 00:28:40.601 valid_lft forever preferred_lft forever 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:40.601 11:37:08 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:28:40.601 192.168.100.9' 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:28:40.601 192.168.100.9' 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # head -n 1 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:28:40.601 192.168.100.9' 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # tail -n +2 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # head -n 1 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.601 11:37:08 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:28:40.601 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:40.602 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:40.602 11:37:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:28:43.199 Waiting for block devices as requested 00:28:43.199 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:43.199 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:43.199 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:43.199 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:43.199 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:43.199 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:43.199 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:43.458 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:43.458 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:43.718 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:43.718 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:43.718 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:43.718 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:43.979 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:43.979 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:43.979 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:43.979 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # 
is_block_zoned nvme0n1 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:44.551 No valid GPT data, bailing 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo rdma 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:44.551 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 192.168.100.8 -t rdma -s 4420 00:28:44.813 00:28:44.813 Discovery Log Number of Records 2, Generation counter 2 00:28:44.813 =====Discovery Log Entry 0====== 00:28:44.813 trtype: rdma 00:28:44.813 adrfam: ipv4 00:28:44.813 subtype: current discovery subsystem 00:28:44.813 treq: not specified, sq flow control disable supported 00:28:44.813 portid: 1 00:28:44.813 trsvcid: 4420 00:28:44.813 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:44.813 traddr: 192.168.100.8 00:28:44.813 eflags: 
none 00:28:44.813 rdma_prtype: not specified 00:28:44.813 rdma_qptype: connected 00:28:44.813 rdma_cms: rdma-cm 00:28:44.813 rdma_pkey: 0x0000 00:28:44.813 =====Discovery Log Entry 1====== 00:28:44.813 trtype: rdma 00:28:44.813 adrfam: ipv4 00:28:44.813 subtype: nvme subsystem 00:28:44.813 treq: not specified, sq flow control disable supported 00:28:44.813 portid: 1 00:28:44.813 trsvcid: 4420 00:28:44.813 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:44.813 traddr: 192.168.100.8 00:28:44.813 eflags: none 00:28:44.813 rdma_prtype: not specified 00:28:44.813 rdma_qptype: connected 00:28:44.813 rdma_cms: rdma-cm 00:28:44.813 rdma_pkey: 0x0000 00:28:44.813 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:28:44.813 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:44.813 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.813 ===================================================== 00:28:44.813 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:44.813 ===================================================== 00:28:44.813 Controller Capabilities/Features 00:28:44.814 ================================ 00:28:44.814 Vendor ID: 0000 00:28:44.814 Subsystem Vendor ID: 0000 00:28:44.814 Serial Number: b35967e706aaa9d3bf12 00:28:44.814 Model Number: Linux 00:28:44.814 Firmware Version: 6.7.0-68 00:28:44.814 Recommended Arb Burst: 0 00:28:44.814 IEEE OUI Identifier: 00 00 00 00:28:44.814 Multi-path I/O 00:28:44.814 May have multiple subsystem ports: No 00:28:44.814 May have multiple controllers: No 00:28:44.814 Associated with SR-IOV VF: No 00:28:44.814 Max Data Transfer Size: Unlimited 00:28:44.814 Max Number of Namespaces: 0 00:28:44.814 Max Number of I/O Queues: 1024 00:28:44.814 NVMe Specification Version (VS): 1.3 00:28:44.814 NVMe Specification Version (Identify): 1.3 00:28:44.814 Maximum Queue Entries: 128 00:28:44.814 Contiguous Queues Required: No 00:28:44.814 Arbitration Mechanisms Supported 00:28:44.814 Weighted Round Robin: Not Supported 00:28:44.814 Vendor Specific: Not Supported 00:28:44.814 Reset Timeout: 7500 ms 00:28:44.814 Doorbell Stride: 4 bytes 00:28:44.814 NVM Subsystem Reset: Not Supported 00:28:44.814 Command Sets Supported 00:28:44.814 NVM Command Set: Supported 00:28:44.814 Boot Partition: Not Supported 00:28:44.814 Memory Page Size Minimum: 4096 bytes 00:28:44.814 Memory Page Size Maximum: 4096 bytes 00:28:44.814 Persistent Memory Region: Not Supported 00:28:44.814 Optional Asynchronous Events Supported 00:28:44.814 Namespace Attribute Notices: Not Supported 00:28:44.814 Firmware Activation Notices: Not Supported 00:28:44.814 ANA Change Notices: Not Supported 00:28:44.814 PLE Aggregate Log Change Notices: Not Supported 00:28:44.814 LBA Status Info Alert Notices: Not Supported 00:28:44.814 EGE Aggregate Log Change Notices: Not Supported 00:28:44.814 Normal NVM Subsystem Shutdown event: Not Supported 00:28:44.814 Zone Descriptor Change Notices: Not Supported 00:28:44.814 Discovery Log Change Notices: Supported 00:28:44.814 Controller Attributes 00:28:44.814 128-bit Host Identifier: Not Supported 00:28:44.814 Non-Operational Permissive Mode: Not Supported 00:28:44.814 NVM Sets: Not Supported 00:28:44.814 Read Recovery Levels: Not Supported 00:28:44.814 Endurance Groups: Not Supported 00:28:44.814 Predictable Latency Mode: Not Supported 00:28:44.814 Traffic Based 
Keep ALive: Not Supported 00:28:44.814 Namespace Granularity: Not Supported 00:28:44.814 SQ Associations: Not Supported 00:28:44.814 UUID List: Not Supported 00:28:44.814 Multi-Domain Subsystem: Not Supported 00:28:44.814 Fixed Capacity Management: Not Supported 00:28:44.814 Variable Capacity Management: Not Supported 00:28:44.814 Delete Endurance Group: Not Supported 00:28:44.814 Delete NVM Set: Not Supported 00:28:44.814 Extended LBA Formats Supported: Not Supported 00:28:44.814 Flexible Data Placement Supported: Not Supported 00:28:44.814 00:28:44.814 Controller Memory Buffer Support 00:28:44.814 ================================ 00:28:44.814 Supported: No 00:28:44.814 00:28:44.814 Persistent Memory Region Support 00:28:44.814 ================================ 00:28:44.814 Supported: No 00:28:44.814 00:28:44.814 Admin Command Set Attributes 00:28:44.814 ============================ 00:28:44.814 Security Send/Receive: Not Supported 00:28:44.814 Format NVM: Not Supported 00:28:44.814 Firmware Activate/Download: Not Supported 00:28:44.814 Namespace Management: Not Supported 00:28:44.814 Device Self-Test: Not Supported 00:28:44.814 Directives: Not Supported 00:28:44.814 NVMe-MI: Not Supported 00:28:44.814 Virtualization Management: Not Supported 00:28:44.814 Doorbell Buffer Config: Not Supported 00:28:44.814 Get LBA Status Capability: Not Supported 00:28:44.814 Command & Feature Lockdown Capability: Not Supported 00:28:44.814 Abort Command Limit: 1 00:28:44.814 Async Event Request Limit: 1 00:28:44.814 Number of Firmware Slots: N/A 00:28:44.814 Firmware Slot 1 Read-Only: N/A 00:28:44.814 Firmware Activation Without Reset: N/A 00:28:44.814 Multiple Update Detection Support: N/A 00:28:44.814 Firmware Update Granularity: No Information Provided 00:28:44.814 Per-Namespace SMART Log: No 00:28:44.814 Asymmetric Namespace Access Log Page: Not Supported 00:28:44.814 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:44.814 Command Effects Log Page: Not Supported 00:28:44.814 Get Log Page Extended Data: Supported 00:28:44.814 Telemetry Log Pages: Not Supported 00:28:44.814 Persistent Event Log Pages: Not Supported 00:28:44.814 Supported Log Pages Log Page: May Support 00:28:44.814 Commands Supported & Effects Log Page: Not Supported 00:28:44.814 Feature Identifiers & Effects Log Page:May Support 00:28:44.814 NVMe-MI Commands & Effects Log Page: May Support 00:28:44.814 Data Area 4 for Telemetry Log: Not Supported 00:28:44.814 Error Log Page Entries Supported: 1 00:28:44.814 Keep Alive: Not Supported 00:28:44.814 00:28:44.814 NVM Command Set Attributes 00:28:44.814 ========================== 00:28:44.814 Submission Queue Entry Size 00:28:44.814 Max: 1 00:28:44.814 Min: 1 00:28:44.814 Completion Queue Entry Size 00:28:44.814 Max: 1 00:28:44.814 Min: 1 00:28:44.814 Number of Namespaces: 0 00:28:44.814 Compare Command: Not Supported 00:28:44.814 Write Uncorrectable Command: Not Supported 00:28:44.814 Dataset Management Command: Not Supported 00:28:44.814 Write Zeroes Command: Not Supported 00:28:44.814 Set Features Save Field: Not Supported 00:28:44.814 Reservations: Not Supported 00:28:44.814 Timestamp: Not Supported 00:28:44.814 Copy: Not Supported 00:28:44.814 Volatile Write Cache: Not Present 00:28:44.814 Atomic Write Unit (Normal): 1 00:28:44.814 Atomic Write Unit (PFail): 1 00:28:44.814 Atomic Compare & Write Unit: 1 00:28:44.814 Fused Compare & Write: Not Supported 00:28:44.814 Scatter-Gather List 00:28:44.814 SGL Command Set: Supported 00:28:44.814 SGL Keyed: Supported 00:28:44.814 SGL Bit 
Bucket Descriptor: Not Supported 00:28:44.814 SGL Metadata Pointer: Not Supported 00:28:44.814 Oversized SGL: Not Supported 00:28:44.814 SGL Metadata Address: Not Supported 00:28:44.814 SGL Offset: Supported 00:28:44.814 Transport SGL Data Block: Not Supported 00:28:44.814 Replay Protected Memory Block: Not Supported 00:28:44.814 00:28:44.814 Firmware Slot Information 00:28:44.814 ========================= 00:28:44.814 Active slot: 0 00:28:44.814 00:28:44.814 00:28:44.814 Error Log 00:28:44.814 ========= 00:28:44.814 00:28:44.814 Active Namespaces 00:28:44.814 ================= 00:28:44.814 Discovery Log Page 00:28:44.814 ================== 00:28:44.814 Generation Counter: 2 00:28:44.814 Number of Records: 2 00:28:44.814 Record Format: 0 00:28:44.814 00:28:44.814 Discovery Log Entry 0 00:28:44.814 ---------------------- 00:28:44.814 Transport Type: 1 (RDMA) 00:28:44.814 Address Family: 1 (IPv4) 00:28:44.814 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:44.814 Entry Flags: 00:28:44.814 Duplicate Returned Information: 0 00:28:44.814 Explicit Persistent Connection Support for Discovery: 0 00:28:44.814 Transport Requirements: 00:28:44.814 Secure Channel: Not Specified 00:28:44.814 Port ID: 1 (0x0001) 00:28:44.814 Controller ID: 65535 (0xffff) 00:28:44.814 Admin Max SQ Size: 32 00:28:44.814 Transport Service Identifier: 4420 00:28:44.814 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:44.814 Transport Address: 192.168.100.8 00:28:44.814 Transport Specific Address Subtype - RDMA 00:28:44.814 RDMA QP Service Type: 1 (Reliable Connected) 00:28:44.814 RDMA Provider Type: 1 (No provider specified) 00:28:44.814 RDMA CM Service: 1 (RDMA_CM) 00:28:44.814 Discovery Log Entry 1 00:28:44.814 ---------------------- 00:28:44.814 Transport Type: 1 (RDMA) 00:28:44.814 Address Family: 1 (IPv4) 00:28:44.814 Subsystem Type: 2 (NVM Subsystem) 00:28:44.814 Entry Flags: 00:28:44.815 Duplicate Returned Information: 0 00:28:44.815 Explicit Persistent Connection Support for Discovery: 0 00:28:44.815 Transport Requirements: 00:28:44.815 Secure Channel: Not Specified 00:28:44.815 Port ID: 1 (0x0001) 00:28:44.815 Controller ID: 65535 (0xffff) 00:28:44.815 Admin Max SQ Size: 32 00:28:44.815 Transport Service Identifier: 4420 00:28:44.815 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:44.815 Transport Address: 192.168.100.8 00:28:44.815 Transport Specific Address Subtype - RDMA 00:28:44.815 RDMA QP Service Type: 1 (Reliable Connected) 00:28:44.815 RDMA Provider Type: 1 (No provider specified) 00:28:44.815 RDMA CM Service: 1 (RDMA_CM) 00:28:44.815 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:44.815 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.076 get_feature(0x01) failed 00:28:45.076 get_feature(0x02) failed 00:28:45.076 get_feature(0x04) failed 00:28:45.076 ===================================================== 00:28:45.076 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:28:45.076 ===================================================== 00:28:45.076 Controller Capabilities/Features 00:28:45.076 ================================ 00:28:45.076 Vendor ID: 0000 00:28:45.076 Subsystem Vendor ID: 0000 00:28:45.076 Serial Number: 663b88268972d1deb15a 00:28:45.076 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 
00:28:45.076 Firmware Version: 6.7.0-68 00:28:45.076 Recommended Arb Burst: 6 00:28:45.076 IEEE OUI Identifier: 00 00 00 00:28:45.076 Multi-path I/O 00:28:45.076 May have multiple subsystem ports: Yes 00:28:45.076 May have multiple controllers: Yes 00:28:45.076 Associated with SR-IOV VF: No 00:28:45.076 Max Data Transfer Size: 1048576 00:28:45.076 Max Number of Namespaces: 1024 00:28:45.076 Max Number of I/O Queues: 128 00:28:45.076 NVMe Specification Version (VS): 1.3 00:28:45.076 NVMe Specification Version (Identify): 1.3 00:28:45.076 Maximum Queue Entries: 128 00:28:45.076 Contiguous Queues Required: No 00:28:45.076 Arbitration Mechanisms Supported 00:28:45.076 Weighted Round Robin: Not Supported 00:28:45.076 Vendor Specific: Not Supported 00:28:45.076 Reset Timeout: 7500 ms 00:28:45.076 Doorbell Stride: 4 bytes 00:28:45.076 NVM Subsystem Reset: Not Supported 00:28:45.076 Command Sets Supported 00:28:45.076 NVM Command Set: Supported 00:28:45.076 Boot Partition: Not Supported 00:28:45.076 Memory Page Size Minimum: 4096 bytes 00:28:45.076 Memory Page Size Maximum: 4096 bytes 00:28:45.077 Persistent Memory Region: Not Supported 00:28:45.077 Optional Asynchronous Events Supported 00:28:45.077 Namespace Attribute Notices: Supported 00:28:45.077 Firmware Activation Notices: Not Supported 00:28:45.077 ANA Change Notices: Supported 00:28:45.077 PLE Aggregate Log Change Notices: Not Supported 00:28:45.077 LBA Status Info Alert Notices: Not Supported 00:28:45.077 EGE Aggregate Log Change Notices: Not Supported 00:28:45.077 Normal NVM Subsystem Shutdown event: Not Supported 00:28:45.077 Zone Descriptor Change Notices: Not Supported 00:28:45.077 Discovery Log Change Notices: Not Supported 00:28:45.077 Controller Attributes 00:28:45.077 128-bit Host Identifier: Supported 00:28:45.077 Non-Operational Permissive Mode: Not Supported 00:28:45.077 NVM Sets: Not Supported 00:28:45.077 Read Recovery Levels: Not Supported 00:28:45.077 Endurance Groups: Not Supported 00:28:45.077 Predictable Latency Mode: Not Supported 00:28:45.077 Traffic Based Keep ALive: Supported 00:28:45.077 Namespace Granularity: Not Supported 00:28:45.077 SQ Associations: Not Supported 00:28:45.077 UUID List: Not Supported 00:28:45.077 Multi-Domain Subsystem: Not Supported 00:28:45.077 Fixed Capacity Management: Not Supported 00:28:45.077 Variable Capacity Management: Not Supported 00:28:45.077 Delete Endurance Group: Not Supported 00:28:45.077 Delete NVM Set: Not Supported 00:28:45.077 Extended LBA Formats Supported: Not Supported 00:28:45.077 Flexible Data Placement Supported: Not Supported 00:28:45.077 00:28:45.077 Controller Memory Buffer Support 00:28:45.077 ================================ 00:28:45.077 Supported: No 00:28:45.077 00:28:45.077 Persistent Memory Region Support 00:28:45.077 ================================ 00:28:45.077 Supported: No 00:28:45.077 00:28:45.077 Admin Command Set Attributes 00:28:45.077 ============================ 00:28:45.077 Security Send/Receive: Not Supported 00:28:45.077 Format NVM: Not Supported 00:28:45.077 Firmware Activate/Download: Not Supported 00:28:45.077 Namespace Management: Not Supported 00:28:45.077 Device Self-Test: Not Supported 00:28:45.077 Directives: Not Supported 00:28:45.077 NVMe-MI: Not Supported 00:28:45.077 Virtualization Management: Not Supported 00:28:45.077 Doorbell Buffer Config: Not Supported 00:28:45.077 Get LBA Status Capability: Not Supported 00:28:45.077 Command & Feature Lockdown Capability: Not Supported 00:28:45.077 Abort Command Limit: 4 00:28:45.077 Async Event 
Request Limit: 4 00:28:45.077 Number of Firmware Slots: N/A 00:28:45.077 Firmware Slot 1 Read-Only: N/A 00:28:45.077 Firmware Activation Without Reset: N/A 00:28:45.077 Multiple Update Detection Support: N/A 00:28:45.077 Firmware Update Granularity: No Information Provided 00:28:45.077 Per-Namespace SMART Log: Yes 00:28:45.077 Asymmetric Namespace Access Log Page: Supported 00:28:45.077 ANA Transition Time : 10 sec 00:28:45.077 00:28:45.077 Asymmetric Namespace Access Capabilities 00:28:45.077 ANA Optimized State : Supported 00:28:45.077 ANA Non-Optimized State : Supported 00:28:45.077 ANA Inaccessible State : Supported 00:28:45.077 ANA Persistent Loss State : Supported 00:28:45.077 ANA Change State : Supported 00:28:45.077 ANAGRPID is not changed : No 00:28:45.077 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:45.077 00:28:45.077 ANA Group Identifier Maximum : 128 00:28:45.077 Number of ANA Group Identifiers : 128 00:28:45.077 Max Number of Allowed Namespaces : 1024 00:28:45.077 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:45.077 Command Effects Log Page: Supported 00:28:45.077 Get Log Page Extended Data: Supported 00:28:45.077 Telemetry Log Pages: Not Supported 00:28:45.077 Persistent Event Log Pages: Not Supported 00:28:45.077 Supported Log Pages Log Page: May Support 00:28:45.077 Commands Supported & Effects Log Page: Not Supported 00:28:45.077 Feature Identifiers & Effects Log Page:May Support 00:28:45.077 NVMe-MI Commands & Effects Log Page: May Support 00:28:45.077 Data Area 4 for Telemetry Log: Not Supported 00:28:45.077 Error Log Page Entries Supported: 128 00:28:45.077 Keep Alive: Supported 00:28:45.077 Keep Alive Granularity: 1000 ms 00:28:45.077 00:28:45.077 NVM Command Set Attributes 00:28:45.077 ========================== 00:28:45.077 Submission Queue Entry Size 00:28:45.077 Max: 64 00:28:45.077 Min: 64 00:28:45.077 Completion Queue Entry Size 00:28:45.077 Max: 16 00:28:45.077 Min: 16 00:28:45.077 Number of Namespaces: 1024 00:28:45.077 Compare Command: Not Supported 00:28:45.077 Write Uncorrectable Command: Not Supported 00:28:45.077 Dataset Management Command: Supported 00:28:45.077 Write Zeroes Command: Supported 00:28:45.077 Set Features Save Field: Not Supported 00:28:45.077 Reservations: Not Supported 00:28:45.077 Timestamp: Not Supported 00:28:45.077 Copy: Not Supported 00:28:45.077 Volatile Write Cache: Present 00:28:45.077 Atomic Write Unit (Normal): 1 00:28:45.077 Atomic Write Unit (PFail): 1 00:28:45.077 Atomic Compare & Write Unit: 1 00:28:45.077 Fused Compare & Write: Not Supported 00:28:45.077 Scatter-Gather List 00:28:45.077 SGL Command Set: Supported 00:28:45.077 SGL Keyed: Supported 00:28:45.077 SGL Bit Bucket Descriptor: Not Supported 00:28:45.077 SGL Metadata Pointer: Not Supported 00:28:45.077 Oversized SGL: Not Supported 00:28:45.077 SGL Metadata Address: Not Supported 00:28:45.077 SGL Offset: Supported 00:28:45.077 Transport SGL Data Block: Not Supported 00:28:45.077 Replay Protected Memory Block: Not Supported 00:28:45.077 00:28:45.077 Firmware Slot Information 00:28:45.077 ========================= 00:28:45.077 Active slot: 0 00:28:45.077 00:28:45.077 Asymmetric Namespace Access 00:28:45.077 =========================== 00:28:45.077 Change Count : 0 00:28:45.077 Number of ANA Group Descriptors : 1 00:28:45.077 ANA Group Descriptor : 0 00:28:45.077 ANA Group ID : 1 00:28:45.077 Number of NSID Values : 1 00:28:45.077 Change Count : 0 00:28:45.077 ANA State : 1 00:28:45.077 Namespace Identifier : 1 00:28:45.077 00:28:45.077 Commands Supported 
and Effects 00:28:45.077 ============================== 00:28:45.077 Admin Commands 00:28:45.077 -------------- 00:28:45.077 Get Log Page (02h): Supported 00:28:45.077 Identify (06h): Supported 00:28:45.077 Abort (08h): Supported 00:28:45.077 Set Features (09h): Supported 00:28:45.077 Get Features (0Ah): Supported 00:28:45.077 Asynchronous Event Request (0Ch): Supported 00:28:45.077 Keep Alive (18h): Supported 00:28:45.077 I/O Commands 00:28:45.077 ------------ 00:28:45.077 Flush (00h): Supported 00:28:45.077 Write (01h): Supported LBA-Change 00:28:45.077 Read (02h): Supported 00:28:45.077 Write Zeroes (08h): Supported LBA-Change 00:28:45.077 Dataset Management (09h): Supported 00:28:45.077 00:28:45.077 Error Log 00:28:45.077 ========= 00:28:45.077 Entry: 0 00:28:45.077 Error Count: 0x3 00:28:45.077 Submission Queue Id: 0x0 00:28:45.077 Command Id: 0x5 00:28:45.077 Phase Bit: 0 00:28:45.077 Status Code: 0x2 00:28:45.077 Status Code Type: 0x0 00:28:45.077 Do Not Retry: 1 00:28:45.077 Error Location: 0x28 00:28:45.077 LBA: 0x0 00:28:45.077 Namespace: 0x0 00:28:45.077 Vendor Log Page: 0x0 00:28:45.077 ----------- 00:28:45.077 Entry: 1 00:28:45.077 Error Count: 0x2 00:28:45.077 Submission Queue Id: 0x0 00:28:45.077 Command Id: 0x5 00:28:45.077 Phase Bit: 0 00:28:45.077 Status Code: 0x2 00:28:45.077 Status Code Type: 0x0 00:28:45.077 Do Not Retry: 1 00:28:45.077 Error Location: 0x28 00:28:45.077 LBA: 0x0 00:28:45.077 Namespace: 0x0 00:28:45.077 Vendor Log Page: 0x0 00:28:45.077 ----------- 00:28:45.077 Entry: 2 00:28:45.077 Error Count: 0x1 00:28:45.077 Submission Queue Id: 0x0 00:28:45.077 Command Id: 0x0 00:28:45.077 Phase Bit: 0 00:28:45.077 Status Code: 0x2 00:28:45.077 Status Code Type: 0x0 00:28:45.077 Do Not Retry: 1 00:28:45.077 Error Location: 0x28 00:28:45.077 LBA: 0x0 00:28:45.077 Namespace: 0x0 00:28:45.077 Vendor Log Page: 0x0 00:28:45.077 00:28:45.077 Number of Queues 00:28:45.077 ================ 00:28:45.077 Number of I/O Submission Queues: 128 00:28:45.077 Number of I/O Completion Queues: 128 00:28:45.077 00:28:45.077 ZNS Specific Controller Data 00:28:45.077 ============================ 00:28:45.077 Zone Append Size Limit: 0 00:28:45.077 00:28:45.077 00:28:45.077 Active Namespaces 00:28:45.077 ================= 00:28:45.077 get_feature(0x05) failed 00:28:45.077 Namespace ID:1 00:28:45.077 Command Set Identifier: NVM (00h) 00:28:45.078 Deallocate: Supported 00:28:45.078 Deallocated/Unwritten Error: Not Supported 00:28:45.078 Deallocated Read Value: Unknown 00:28:45.078 Deallocate in Write Zeroes: Not Supported 00:28:45.078 Deallocated Guard Field: 0xFFFF 00:28:45.078 Flush: Supported 00:28:45.078 Reservation: Not Supported 00:28:45.078 Namespace Sharing Capabilities: Multiple Controllers 00:28:45.078 Size (in LBAs): 3750748848 (1788GiB) 00:28:45.078 Capacity (in LBAs): 3750748848 (1788GiB) 00:28:45.078 Utilization (in LBAs): 3750748848 (1788GiB) 00:28:45.078 UUID: 9df3e728-060c-4ce7-bb9d-4aeaff2dc8ee 00:28:45.078 Thin Provisioning: Not Supported 00:28:45.078 Per-NS Atomic Units: Yes 00:28:45.078 Atomic Write Unit (Normal): 8 00:28:45.078 Atomic Write Unit (PFail): 8 00:28:45.078 Preferred Write Granularity: 8 00:28:45.078 Atomic Compare & Write Unit: 8 00:28:45.078 Atomic Boundary Size (Normal): 0 00:28:45.078 Atomic Boundary Size (PFail): 0 00:28:45.078 Atomic Boundary Offset: 0 00:28:45.078 NGUID/EUI64 Never Reused: No 00:28:45.078 ANA group ID: 1 00:28:45.078 Namespace Write Protected: No 00:28:45.078 Number of LBA Formats: 1 00:28:45.078 Current LBA Format: LBA Format #00 
00:28:45.078 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:45.078 00:28:45.078 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:45.078 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:45.078 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:28:45.078 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:28:45.078 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:28:45.078 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:28:45.078 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:45.078 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:28:45.078 rmmod nvme_rdma 00:28:45.078 rmmod nvme_fabrics 00:28:45.078 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:45.078 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:28:45.078 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:28:45.078 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:45.078 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:45.078 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:28:45.078 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:45.078 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:45.078 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:28:45.078 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:45.078 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:45.078 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:45.078 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:45.078 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:45.078 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:28:45.078 11:37:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:28:48.380 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:48.380 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:48.380 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:48.380 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:48.380 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:48.380 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:48.380 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:48.380 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:48.380 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:48.641 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 
00:28:48.641 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:48.641 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:48.641 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:48.641 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:48.641 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:48.641 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:48.641 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:48.902 00:28:48.902 real 0m16.526s 00:28:48.902 user 0m5.227s 00:28:48.902 sys 0m10.312s 00:28:48.902 11:37:17 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:48.902 11:37:17 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:48.902 ************************************ 00:28:48.902 END TEST nvmf_identify_kernel_target 00:28:48.902 ************************************ 00:28:48.902 11:37:17 nvmf_rdma -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:28:48.902 11:37:17 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:48.902 11:37:17 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:48.902 11:37:17 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:48.902 ************************************ 00:28:48.902 START TEST nvmf_auth_host 00:28:48.902 ************************************ 00:28:48.902 11:37:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:28:49.163 * Looking for test storage... 00:28:49.163 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:49.163 11:37:17 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:49.163 
11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:28:49.163 11:37:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 
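The array declarations the trace is walking through here (e810 above, x722 and mlx just below) are later filled from common.sh's pre-built pci_bus_cache and matched against known vendor:device IDs; because this run sets SPDK_TEST_NVMF_NICS=mlx5, only the Mellanox entries survive the filtering. A rough, illustrative stand-in for that classification, assuming lspci is available (the real script reads its PCI cache rather than shelling out to lspci):

# Illustrative sketch only: collect ConnectX-4 Lx / ConnectX-5 ports
# (vendor 0x15b3, devices 0x1015 / 0x1017), mirroring the ID checks
# visible further down in the trace.
mapfile -t mlx < <(lspci -Dnn | awk '/15b3:(1015|1017)/ {print $1}')
echo "Found ${#mlx[@]} Mellanox port(s): ${mlx[*]}"

On this rig the two matching ports are 0000:98:00.0 and 0000:98:00.1, as the "Found" lines later in the trace confirm.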
00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:28:57.307 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:28:57.307 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:28:57.307 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 
00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:28:57.308 Found net devices under 0000:98:00.0: mlx_0_0 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:28:57.308 Found net devices under 0000:98:00.1: mlx_0_1 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@420 -- # rdma_device_init 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # uname 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@66 -- # 
modprobe iw_cm 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:28:57.308 26: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:57.308 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:28:57.308 altname enp152s0f0np0 00:28:57.308 altname ens817f0np0 00:28:57.308 inet 192.168.100.8/24 scope global mlx_0_0 00:28:57.308 valid_lft forever preferred_lft forever 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:57.308 11:37:24 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:28:57.308 27: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:57.308 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:28:57.308 altname enp152s0f1np1 00:28:57.308 altname ens817f1np1 00:28:57.308 inet 192.168.100.9/24 scope global mlx_0_1 00:28:57.308 valid_lft forever preferred_lft forever 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # 
get_ip_address mlx_0_0 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:28:57.308 192.168.100.9' 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:28:57.308 192.168.100.9' 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # head -n 1 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:28:57.308 192.168.100.9' 00:28:57.308 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # tail -n +2 00:28:57.309 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # head -n 1 00:28:57.309 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:57.309 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:28:57.309 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:57.309 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:28:57.309 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:28:57.309 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:28:57.309 11:37:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:57.309 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:57.309 11:37:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:57.309 11:37:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.309 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3769488 00:28:57.309 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3769488 00:28:57.309 11:37:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:57.309 11:37:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 3769488 ']' 00:28:57.309 11:37:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.309 11:37:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:57.309 11:37:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:28:57.309 11:37:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:57.309 11:37:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=44fff4f79235ea32e6da64721427e166 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Gzf 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 44fff4f79235ea32e6da64721427e166 0 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 44fff4f79235ea32e6da64721427e166 0 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=44fff4f79235ea32e6da64721427e166 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Gzf 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Gzf 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Gzf 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@726 -- # digest=sha512 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=60e70dc0b573c73f96d00c358b71141aeb77f62f13153761a13c28be51668f4b 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.nz0 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 60e70dc0b573c73f96d00c358b71141aeb77f62f13153761a13c28be51668f4b 3 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 60e70dc0b573c73f96d00c358b71141aeb77f62f13153761a13c28be51668f4b 3 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=60e70dc0b573c73f96d00c358b71141aeb77f62f13153761a13c28be51668f4b 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.nz0 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.nz0 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.nz0 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=63b57c877117a26de5705702bd42dc750e9233547baea6d7 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.uaw 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 63b57c877117a26de5705702bd42dc750e9233547baea6d7 0 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 63b57c877117a26de5705702bd42dc750e9233547baea6d7 0 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=63b57c877117a26de5705702bd42dc750e9233547baea6d7 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.uaw 00:28:57.309 11:37:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.uaw 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.uaw 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ecb83888e60bbe01e4f81305e1cf236b02def48542f40cf9 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.4yV 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ecb83888e60bbe01e4f81305e1cf236b02def48542f40cf9 2 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ecb83888e60bbe01e4f81305e1cf236b02def48542f40cf9 2 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ecb83888e60bbe01e4f81305e1cf236b02def48542f40cf9 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.4yV 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.4yV 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.4yV 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d5c91c65c196584b031ac76f2c0d1b3f 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:57.309 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.IVc 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d5c91c65c196584b031ac76f2c0d1b3f 1 00:28:57.310 11:37:26 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d5c91c65c196584b031ac76f2c0d1b3f 1 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d5c91c65c196584b031ac76f2c0d1b3f 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.IVc 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.IVc 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.IVc 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eb5efbc464949517c155b6b27551489f 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.TV0 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eb5efbc464949517c155b6b27551489f 1 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eb5efbc464949517c155b6b27551489f 1 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eb5efbc464949517c155b6b27551489f 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.TV0 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.TV0 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.TV0 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:57.310 
11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=15fceb7997a8025763a7f85c09e2855909f4d2bba5441897 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.05P 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 15fceb7997a8025763a7f85c09e2855909f4d2bba5441897 2 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 15fceb7997a8025763a7f85c09e2855909f4d2bba5441897 2 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=15fceb7997a8025763a7f85c09e2855909f4d2bba5441897 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.05P 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.05P 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.05P 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2b6cdc20fc1ce94119f893cd85ee2bd8 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.aTY 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2b6cdc20fc1ce94119f893cd85ee2bd8 0 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2b6cdc20fc1ce94119f893cd85ee2bd8 0 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2b6cdc20fc1ce94119f893cd85ee2bd8 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:57.310 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.aTY 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.aTY 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.aTY 
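Each gen_dhchap_key call above pulls len/2 random bytes from /dev/urandom as lowercase hex via xxd, and format_dhchap_key then wraps that ASCII secret in the NVMe-oF DH-HMAC-CHAP secret representation: DHHC-1:<two-digit hash id>:base64(secret bytes + 4-byte little-endian CRC-32 of the secret):. A self-contained sketch of that wrapping (the zlib/base64 details are inferred from the output format rather than copied from common.sh; the key value is the one generated for keys[1] above):

# Sketch: wrap an ASCII secret in the DHHC-1 secret representation.
key=63b57c877117a26de5705702bd42dc750e9233547baea6d7   # 48 hex chars, from the trace
digest=0                                               # 0 => hash function not specified
python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")  # CRC-32 suffix per the DH-HMAC-CHAP format
print(f"DHHC-1:{int(sys.argv[2]):02}:" + base64.b64encode(secret + crc).decode() + ":")
EOF

The resulting DHHC-1:00:NjNiNTdj... string reappears near the end of this section, where auth.sh hands the key material to nvmet_auth_set_key.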
00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9aca5413df6daf0b03d1074d33a9edea4a27b8449bc86f7f6d4ac862cb71eb02 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.bn8 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9aca5413df6daf0b03d1074d33a9edea4a27b8449bc86f7f6d4ac862cb71eb02 3 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9aca5413df6daf0b03d1074d33a9edea4a27b8449bc86f7f6d4ac862cb71eb02 3 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9aca5413df6daf0b03d1074d33a9edea4a27b8449bc86f7f6d4ac862cb71eb02 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.bn8 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.bn8 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.bn8 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3769488 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 3769488 ']' 00:28:57.571 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.572 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:57.572 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:57.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
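With nvmf_tgt up and listening on /var/tmp/spdk.sock, the loop expanded below registers every generated secret with SPDK's keyring: key$i for each host secret and, where a controller counterpart was generated, ckey$i alongside it. Condensed, the traced loop amounts to:

# Condensed form of the registration loop traced below; keys[] and ckeys[]
# hold the /tmp/spdk.key-* files created above, and rpc_cmd wraps scripts/rpc.py.
for i in "${!keys[@]}"; do
  rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"
  [[ -n ${ckeys[$i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
done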
00:28:57.572 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:57.572 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.572 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:57.572 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:28:57.572 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:57.572 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Gzf 00:28:57.572 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.572 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.572 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.572 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.nz0 ]] 00:28:57.572 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nz0 00:28:57.572 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.572 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.572 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.572 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:57.572 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.uaw 00:28:57.572 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.572 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.833 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.833 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.4yV ]] 00:28:57.833 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4yV 00:28:57.833 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.833 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.833 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.833 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:57.833 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.IVc 00:28:57.833 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.833 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.833 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.833 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.TV0 ]] 00:28:57.833 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TV0 00:28:57.833 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.833 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.833 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.833 11:37:26 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:57.833 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.05P 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.aTY ]] 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.aTY 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.bn8 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@637 -- # 
kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:57.834 11:37:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:29:01.137 Waiting for block devices as requested 00:29:01.137 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:01.137 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:01.137 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:01.137 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:01.397 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:01.397 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:01.397 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:01.657 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:01.657 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:29:01.917 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:01.917 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:01.917 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:01.917 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:02.178 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:02.178 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:02.178 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:02.178 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:03.117 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:03.117 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:03.118 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:03.118 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:29:03.118 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:03.118 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:29:03.118 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:03.118 11:37:32 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:03.118 11:37:32 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:03.118 No valid GPT data, bailing 00:29:03.118 11:37:32 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:03.118 11:37:32 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:29:03.118 11:37:32 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:29:03.118 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:03.118 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:03.118 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:03.118 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:03.378 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:03.378 
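The mkdir calls above and the echo calls that follow build the kernel NVMe-oF target through configfs; the xtrace records the echoed values but not the files they are redirected into. A minimal sketch of the sequence configure_kernel_target performs, assuming the standard nvmet configfs attribute names (attr_model, attr_allow_any_host, device_path, enable, addr_*):

    # paths and values taken from the trace; the redirect targets are assumed
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    ns=$subsys/namespaces/1
    port=$nvmet/ports/1

    mkdir "$subsys" "$ns" "$port"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$ns/device_path"        # first unused, non-zoned block device found above
    echo 1 > "$ns/enable"
    echo 192.168.100.8 > "$port/addr_traddr"     # NVMF_FIRST_TARGET_IP resolved earlier
    echo rdma > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"          # expose the subsystem on the RDMA port

The nvme discover output below confirms the result: a discovery subsystem plus nqn.2024-02.io.spdk:cnode0, both reachable over rdma at 192.168.100.8:4420.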
11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:29:03.378 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:29:03.378 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:03.378 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:29:03.378 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:29:03.378 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@672 -- # echo rdma 00:29:03.378 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:29:03.378 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:29:03.378 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:03.378 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 192.168.100.8 -t rdma -s 4420 00:29:03.378 00:29:03.378 Discovery Log Number of Records 2, Generation counter 2 00:29:03.378 =====Discovery Log Entry 0====== 00:29:03.378 trtype: rdma 00:29:03.378 adrfam: ipv4 00:29:03.378 subtype: current discovery subsystem 00:29:03.378 treq: not specified, sq flow control disable supported 00:29:03.378 portid: 1 00:29:03.378 trsvcid: 4420 00:29:03.378 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:03.378 traddr: 192.168.100.8 00:29:03.378 eflags: none 00:29:03.378 rdma_prtype: not specified 00:29:03.378 rdma_qptype: connected 00:29:03.378 rdma_cms: rdma-cm 00:29:03.379 rdma_pkey: 0x0000 00:29:03.379 =====Discovery Log Entry 1====== 00:29:03.379 trtype: rdma 00:29:03.379 adrfam: ipv4 00:29:03.379 subtype: nvme subsystem 00:29:03.379 treq: not specified, sq flow control disable supported 00:29:03.379 portid: 1 00:29:03.379 trsvcid: 4420 00:29:03.379 subnqn: nqn.2024-02.io.spdk:cnode0 00:29:03.379 traddr: 192.168.100.8 00:29:03.379 eflags: none 00:29:03.379 rdma_prtype: not specified 00:29:03.379 rdma_qptype: connected 00:29:03.379 rdma_cms: rdma-cm 00:29:03.379 rdma_pkey: 0x0000 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: ]] 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.379 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.639 nvme0n1 00:29:03.639 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.639 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.639 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.639 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.639 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.639 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.899 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.899 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.899 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.899 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.899 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.899 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:03.899 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:03.899 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.899 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:29:03.899 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.899 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:03.899 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:03.899 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: ]] 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.900 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.160 nvme0n1 00:29:04.160 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.160 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.161 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.161 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.161 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.161 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.161 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.161 11:37:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.161 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.161 11:37:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:04.161 11:37:33 
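Every iteration above is driven through rpc_cmd, which forwards its arguments to scripts/rpc.py against the running SPDK target. The key files were registered in the keyring once at the start of the test; the set_options/attach pair is then repeated for every digest, DH group and key id. A standalone equivalent of the key0/ffdhe2048 pass that just completed, assuming the default RPC socket:

    ./scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.Gzf
    ./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nz0
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0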
nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: ]] 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.161 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.422 nvme0n1 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: ]] 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.422 11:37:33 
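On the target side, nvmet_auth_set_key reprograms the kernel's host entry before each connect: the four echo calls in every block write the digest, DH group, host key and (when present) controller key for nqn.2024-02.io.spdk:host0. A rough sketch, assuming the standard kernel DH-HMAC-CHAP attribute names, since the redirect targets are not captured in the xtrace:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo 'hmac(sha256)' > "$host/dhchap_hash"       # digest under test in this pass
    echo ffdhe2048 > "$host/dhchap_dhgroup"         # DH group under test
    echo "$key" > "$host/dhchap_key"                # DHHC-1:... secret for the current key id
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # bidirectional key, if one was generated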
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.422 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.727 nvme0n1 00:29:04.727 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.727 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.727 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.727 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.728 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.728 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.728 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.728 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.728 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.728 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:05.005 11:37:33 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: ]] 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.005 nvme0n1 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host 
-- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.005 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.267 11:37:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.267 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.528 nvme0n1 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- 
# [[ -z DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: ]] 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.528 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.789 nvme0n1 00:29:05.789 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.789 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.789 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.790 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.790 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.790 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.790 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
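Each combination ends with the same verification before moving on: the freshly attached controller has to show up under its expected name, which only happens if the DH-HMAC-CHAP handshake succeeded, and it is then detached to make room for the next digest/dhgroup/key id. Standalone, the check amounts to roughly (rpc.py path assumed):

    name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]                                 # attach (and therefore auth) succeeded
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0   # clean up for the next combination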
00:29:05.790 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.790 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.790 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: ]] 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.052 11:37:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.325 nvme0n1 00:29:06.325 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: ]] 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:06.326 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.327 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.327 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:06.327 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:06.327 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:06.327 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:06.327 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:06.327 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:06.327 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.327 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.589 nvme0n1 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: ]] 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:06.589 
11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:06.589 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:06.850 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.850 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.111 nvme0n1 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.111 11:37:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.372 nvme0n1 00:29:07.372 11:37:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.372 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.372 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 
-- # dhgroup=ffdhe4096 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: ]] 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:29:07.373 11:37:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.944 nvme0n1 00:29:07.944 11:37:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.944 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.944 11:37:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.944 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.944 11:37:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.944 11:37:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.944 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.944 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.944 11:37:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.944 11:37:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.944 11:37:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.944 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.944 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:29:07.944 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.944 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:07.944 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:07.944 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:07.944 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:07.944 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:07.944 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:07.944 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:07.944 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:07.944 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: ]] 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.945 11:37:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.206 nvme0n1 00:29:08.206 11:37:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:08.206 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.206 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.206 11:37:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:08.206 11:37:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.206 11:37:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: ]] 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:08.468 11:37:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.729 nvme0n1 00:29:08.729 11:37:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:08.729 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:29:08.729 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.729 11:37:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:08.729 11:37:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.729 11:37:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:08.729 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.729 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.729 11:37:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:08.729 11:37:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.729 11:37:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:08.730 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.730 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:29:08.730 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.730 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:08.730 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:08.730 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:08.730 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:08.730 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:08.730 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:08.730 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:08.730 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:08.730 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: ]] 00:29:08.730 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:08.730 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:29:08.730 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.730 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:08.730 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:08.730 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:08.730 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.730 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:08.730 11:37:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:08.990 11:37:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.990 11:37:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:08.990 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.990 11:37:37 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:29:08.990 11:37:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:08.990 11:37:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:08.990 11:37:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.990 11:37:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.990 11:37:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:08.990 11:37:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:08.990 11:37:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:08.990 11:37:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:08.990 11:37:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:08.990 11:37:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:08.990 11:37:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:08.990 11:37:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.252 nvme0n1 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:09.252 11:37:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.824 nvme0n1 00:29:09.824 11:37:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:09.824 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.824 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.824 11:37:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:09.824 11:37:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.824 11:37:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:09.824 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.824 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.824 
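After each attach the test verifies the controller and tears it down before moving to the next keyid; the outer loops at @101/@102 then repeat the whole sequence for every dhgroup seen in this trace (ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192) and every key index. A hedged sketch of that verify/teardown step, again limited to calls shown in the trace:

    # Confirm the authenticated controller registered as nvme0, then detach it
    # so the next keyid/dhgroup combination starts from a clean state.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
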
11:37:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:09.824 11:37:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.824 11:37:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:09.824 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:09.824 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.824 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:29:09.824 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.824 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:09.824 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:09.824 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:09.824 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: ]] 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:09.825 11:37:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.398 nvme0n1 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: ]] 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:10.398 11:37:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.971 nvme0n1 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 
00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: ]] 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:10.971 11:37:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:10.971 
11:37:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:11.232 11:37:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:11.232 11:37:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:11.232 11:37:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.232 11:37:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.803 nvme0n1 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: ]] 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.803 11:37:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.374 nvme0n1 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:29:12.374 11:37:41 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:12.374 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:12.375 11:37:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:12.375 11:37:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:29:12.946 nvme0n1 00:29:12.946 11:37:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: ]] 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:12.947 11:37:41 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:12.947 11:37:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.208 11:37:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:13.208 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:13.208 11:37:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:13.208 11:37:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:13.208 11:37:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:13.208 11:37:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.208 11:37:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.208 11:37:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:13.208 11:37:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:13.208 11:37:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:13.208 11:37:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:13.208 11:37:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:13.208 11:37:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:13.208 11:37:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:13.208 11:37:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.147 nvme0n1 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: ]] 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:14.147 11:37:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.089 nvme0n1 00:29:15.089 11:37:43 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:15.089 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.089 11:37:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:15.089 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:15.089 11:37:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.089 11:37:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:15.089 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.089 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:15.089 11:37:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:15.089 11:37:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.089 11:37:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:15.089 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:15.089 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:29:15.089 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.089 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:15.089 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:15.089 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:15.089 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:15.089 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:15.089 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: ]] 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:15.090 11:37:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.030 nvme0n1 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:16.030 11:37:44 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: ]] 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.030 11:37:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.970 nvme0n1 00:29:16.970 11:37:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.970 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.970 11:37:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.970 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:16.970 11:37:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.971 11:37:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.912 nvme0n1 00:29:17.912 11:37:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.912 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.912 11:37:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.912 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.912 11:37:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.912 11:37:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.912 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.912 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.912 11:37:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.912 11:37:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.912 11:37:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.912 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:17.912 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:17.912 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.912 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:29:17.912 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.912 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:17.912 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:17.912 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:17.912 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:17.912 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: ]] 00:29:17.913 
11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.913 11:37:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.173 nvme0n1 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.173 
11:37:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: ]] 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:18.173 11:37:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.173 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:18.173 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.173 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.173 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.173 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.173 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:18.173 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:18.173 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:18.173 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.173 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.173 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:18.173 
11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:18.173 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:18.173 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:18.173 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:18.174 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:18.174 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.174 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.434 nvme0n1 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: ]] 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.434 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.695 nvme0n1 00:29:18.695 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.695 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.696 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.696 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.696 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.696 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.696 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.696 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.696 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.696 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: ]] 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.956 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.216 nvme0n1 00:29:19.216 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.216 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.216 11:37:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.216 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.216 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.216 11:37:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.216 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.216 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.216 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.216 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.216 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.216 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.216 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:29:19.216 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.216 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:19.216 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:19.216 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:19.216 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:19.216 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:19.216 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:19.216 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:19.216 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:19.216 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:19.216 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:29:19.217 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.217 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:19.217 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:19.217 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:19.217 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.217 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:19.217 11:37:48 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.217 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.217 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.217 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.217 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:19.217 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:19.217 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:19.217 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.217 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.217 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:19.217 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:19.217 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:19.217 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:19.217 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:19.217 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:19.217 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.217 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.480 nvme0n1 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:19.480 11:37:48 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: ]] 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.480 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.775 nvme0n1 
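Every nvme0n1 block in this trace is one pass of the same DH-HMAC-CHAP round trip. A minimal sketch of a single pass, assembled from the commands visible above (the RPC names, transport address, and NQNs are taken verbatim from the log; nvmet_auth_set_key and the key0/ckey0 names come from the host/auth.sh markers, so read this as an illustration of the flow rather than a quote of the script):

# one connect_authenticate pass, e.g. digest=sha256 dhgroup=ffdhe8192 keyid=0
nvmet_auth_set_key sha256 ffdhe8192 0   # target side: install key0 (+ ckey0 for bidirectional auth)
rpc_cmd bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192   # host side: allow only the combination under test
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0            # authenticate during connect
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # verify the controller attached
rpc_cmd bdev_nvme_detach_controller nvme0                 # tear down before the next combination

The DHHC-1:<nn>:<base64>: secrets echoed into the target follow the NVMe-oF DH-HMAC-CHAP secret representation, where the second field (00-03) indicates the hash used to transform the configured secret (00 meaning it is used as-is).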
00:29:19.775 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.775 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.775 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.775 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.775 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.775 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.775 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.775 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.775 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.775 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: ]] 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.035 11:37:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.294 nvme0n1 00:29:20.294 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.294 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.294 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: ]] 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.295 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.554 nvme0n1 00:29:20.554 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.554 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.554 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.554 11:37:49 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.554 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.554 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.554 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.554 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.554 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.554 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: ]] 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:20.814 
11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.814 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.075 nvme0n1 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:21.075 11:37:49 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.075 11:37:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.336 nvme0n1 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: ]] 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.336 11:37:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.596 11:37:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.596 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.596 11:37:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:21.596 11:37:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:21.596 11:37:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:21.596 11:37:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.596 11:37:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.596 11:37:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:21.596 11:37:50 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:21.596 11:37:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:21.596 11:37:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:21.596 11:37:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:21.596 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:21.596 11:37:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.596 11:37:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.857 nvme0n1 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: ]] 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha384 ffdhe4096 1 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.857 11:37:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.427 nvme0n1 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: ]] 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.427 11:37:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.687 nvme0n1 00:29:22.687 11:37:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.687 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.687 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.687 11:37:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.687 11:37:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.688 11:37:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: ]] 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:22.948 
11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.948 11:37:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.208 nvme0n1 00:29:23.208 11:37:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.208 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.208 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.208 11:37:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.208 11:37:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.208 11:37:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.208 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.208 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.208 11:37:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.208 11:37:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.467 11:37:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.728 nvme0n1 00:29:23.728 11:37:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
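[Sketch, not captured output.] The trace above repeats one fixed pattern for every (digest, dhgroup, keyid) combination that host/auth.sh iterates over. The condensed shell sketch below reconstructs that per-key loop from the commands visible in the trace itself: the RPC names, flags, NQNs, and the 192.168.100.8:4420 RDMA listener are taken verbatim from the log; the loop structure and the keys/ckeys arrays are assumptions inferred from the keyid values (0-4) and the DHHC-1 key material echoed above, not copied from the SPDK source.

    # One iteration per key ID, as exercised in the trace for sha384 with
    # ffdhe3072 and ffdhe4096 (assumed structure, reconstructed from the log).
    for keyid in "${!keys[@]}"; do
      # Target side: install the host key (and controller key, when one exists)
      # under hmac(sha384) for the chosen FFDHE group.
      nvmet_auth_set_key sha384 ffdhe4096 "$keyid"
      # Host side: pin the negotiable digest and DH group for the bdev layer...
      rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
      # ...then attach with DH-HMAC-CHAP; the ctrlr key is passed only when a
      # ckey entry exists, which selects bidirectional authentication.
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
          -a 192.168.100.8 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid" \
          ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
      # Success check: the authenticated controller must enumerate as nvme0;
      # then detach so the next key ID starts from a clean state.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done

This also explains the asymmetry visible in the trace: key ID 4 attaches with --dhchap-key key4 alone because its ckey entry is empty, so the ${ckeys[keyid]:+...} expansion drops the --dhchap-ctrlr-key flag and that iteration exercises unidirectional authentication.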
00:29:23.728 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.728 11:37:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.728 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.728 11:37:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.728 11:37:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.728 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.728 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.728 11:37:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.728 11:37:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.728 11:37:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.728 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:23.728 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.728 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:23.728 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.728 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:23.728 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:23.728 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:23.728 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:23.728 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:23.728 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:23.728 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:23.728 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: ]] 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.729 11:37:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.297 nvme0n1 00:29:24.297 11:37:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.297 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.297 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.297 11:37:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.297 11:37:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.297 11:37:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: ]] 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.556 11:37:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.124 nvme0n1 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.124 11:37:53 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: ]] 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.124 11:37:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.693 nvme0n1 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: ]] 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.693 11:37:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.694 11:37:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.694 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.694 11:37:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:25.694 11:37:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:25.694 11:37:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:25.694 11:37:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.694 11:37:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.694 11:37:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:25.694 11:37:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:25.694 11:37:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:25.694 11:37:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:25.694 11:37:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:25.694 11:37:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:25.694 11:37:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.694 11:37:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.263 nvme0n1 00:29:26.263 11:37:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.263 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.263 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.263 11:37:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.263 11:37:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.263 11:37:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.523 11:37:55 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.523 11:37:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.093 nvme0n1 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: ]] 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:27.093 11:37:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:27.094 11:37:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:27.094 11:37:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.094 11:37:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.033 nvme0n1 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: ]] 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:28.033 11:37:56 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.033 11:37:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.976 nvme0n1 00:29:28.976 11:37:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.976 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.976 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.976 11:37:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.976 11:37:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.976 11:37:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.976 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.976 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.976 11:37:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.976 11:37:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.976 11:37:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.976 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.976 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:29:28.976 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.976 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:28.976 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:28.976 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:28.976 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: ]] 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:28.977 
11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.977 11:37:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.917 nvme0n1 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup 
keyid key ckey 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: ]] 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:29.917 11:37:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:29.918 11:37:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:29.918 11:37:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:29.918 11:37:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.918 11:37:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.918 11:37:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:29.918 11:37:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:29.918 11:37:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:29.918 11:37:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:29.918 11:37:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:29.918 11:37:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:29.918 
11:37:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.918 11:37:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.860 nvme0n1 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.860 11:37:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.802 nvme0n1 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 
00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: ]] 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.802 11:38:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.061 nvme0n1 00:29:32.061 11:38:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.061 11:38:00 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.061 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.061 11:38:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.061 11:38:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.061 11:38:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.061 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.061 11:38:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.061 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.061 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.320 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.320 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.320 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:32.320 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.320 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:32.320 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:32.320 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:32.320 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:32.320 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:32.320 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:32.320 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:32.320 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:32.320 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: ]] 00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.321 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.580 nvme0n1 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: ]] 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.581 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.840 nvme0n1 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: ]] 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.840 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.099 nvme0n1 00:29:33.099 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.099 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.099 11:38:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.099 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.099 11:38:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.099 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.099 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.099 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.099 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.099 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.099 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.099 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.099 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:33.099 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.099 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:33.099 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:33.099 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:33.099 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:33.099 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:33.099 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:33.099 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:33.099 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:33.099 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:33.099 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 
ffdhe2048 4 00:29:33.099 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.099 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:33.099 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:33.358 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:33.358 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.358 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:33.358 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.358 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.358 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.358 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.358 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:33.358 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:33.358 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:33.358 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.358 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.358 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:33.358 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:33.358 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:33.358 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:33.358 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:33.358 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:33.358 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.358 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.358 nvme0n1 00:29:33.358 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.618 11:38:02 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: ]] 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_FIRST_TARGET_IP 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.618 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.878 nvme0n1 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: ]] 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.878 11:38:02 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.878 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.879 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.879 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.879 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:33.879 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:33.879 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:33.879 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.879 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.879 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:33.879 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:33.879 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:33.879 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:33.879 11:38:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:33.879 11:38:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:33.879 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.879 11:38:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.139 nvme0n1 00:29:34.139 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.139 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.139 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.139 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.139 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.139 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.456 11:38:03 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: ]] 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.456 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.735 nvme0n1 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: ]] 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 
-- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.735 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.736 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:34.736 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:34.736 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:34.736 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.736 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.736 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:34.736 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:34.736 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:34.736 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:34.736 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:34.736 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:34.736 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.736 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.995 nvme0n1 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
keyid=4 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.995 11:38:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.564 nvme0n1 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
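The trace above repeats one host-side cycle per (digest, dhgroup, keyid) tuple: restrict the allowed DH-HMAC-CHAP digest and DH group, attach the controller over RDMA with the per-slot secrets, confirm the controller registered, then detach. A minimal sketch of that cycle, assuming rpc.py is SPDK's scripts/rpc.py on its default socket and that the named keys ("key0"/"ckey0") were registered earlier in the run (not shown in this excerpt):

  # One verify cycle as replayed from the rpc_cmd calls in the trace.
  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Authentication succeeded only if the controller actually shows up:
  [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The bare "nvme0n1" lines between iterations are the attach call's own output, naming the bdev it created for namespace 1 of the new controller.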
00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: ]] 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:35.564 11:38:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.823 nvme0n1 00:29:35.823 11:38:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:35.823 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.823 11:38:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:35.823 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.823 11:38:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.823 11:38:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
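Before each host-side attach, nvmet_auth_set_key (the host/auth.sh@42-51 markers) pushes the same secrets to the kernel nvmet target; the excerpt only shows the values it echoes ('hmac(sha512)', the DH group, and the DHHC-1 strings), not their destinations. A plausible body for such a helper, with the configfs paths being an assumption based on the Linux nvmet auth interface rather than anything visible here:

  # Assumed nvmet configfs layout; only the echoed values appear in the trace.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  key='DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI:'
  ckey='DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=:'
  echo 'hmac(sha512)' > "$host/dhchap_hash"
  echo ffdhe4096      > "$host/dhchap_dhgroup"
  echo "$key"         > "$host/dhchap_key"
  [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"

Per the NVMe DH-HMAC-CHAP secret representation, the second DHHC-1 field records which hash, if any, the secret was pre-transformed with (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512), and the base64 payload carries the secret plus a CRC-32; all four variants appear across the five slots in this trace.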
00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: ]] 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:35.824 11:38:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.083 11:38:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.083 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.083 11:38:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:36.083 11:38:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:36.083 11:38:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:36.083 11:38:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.083 11:38:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.083 11:38:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:36.083 11:38:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:36.083 11:38:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:36.083 11:38:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:36.083 11:38:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:36.083 11:38:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:36.083 11:38:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.083 11:38:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.345 nvme0n1 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.345 
11:38:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: ]] 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:36.345 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:36.346 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:36.346 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.346 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:36.346 11:38:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.346 11:38:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.346 11:38:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.346 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.346 11:38:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:36.346 11:38:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:36.346 11:38:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:36.346 11:38:05 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.346 11:38:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.346 11:38:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:36.346 11:38:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:36.346 11:38:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:36.346 11:38:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:36.346 11:38:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:36.346 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:36.346 11:38:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.346 11:38:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.920 nvme0n1 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: ]] 
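The host/auth.sh@101 and @102 markers show two nested loops driving this whole section: an outer sweep over DH groups and an inner sweep over the five key slots. A sketch of that driver reconstructed from the markers; only the sha512 pass over ffdhe3072/ffdhe4096/ffdhe6144 is visible in this excerpt, so the array contents below are assumptions:

  dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)       # groups seen in this excerpt
  for dhgroup in "${dhgroups[@]}"; do            # host/auth.sh@101
      for keyid in "${!keys[@]}"; do             # host/auth.sh@102, slots 0..4
          nvmet_auth_set_key   sha512 "$dhgroup" "$keyid"   # host/auth.sh@103
          connect_authenticate sha512 "$dhgroup" "$keyid"   # host/auth.sh@104
      done
  done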
00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.920 11:38:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.180 nvme0n1 00:29:37.180 11:38:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.180 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.180 11:38:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.180 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.180 11:38:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.180 11:38:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:37.441 11:38:06 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.441 11:38:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.701 nvme0n1 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: ]] 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.701 11:38:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.961 11:38:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.961 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.961 11:38:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:37.961 11:38:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:37.961 11:38:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:37.961 11:38:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.961 11:38:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.961 11:38:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:37.961 11:38:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:37.961 11:38:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:37.961 11:38:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:37.961 11:38:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:37.961 11:38:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:37.961 11:38:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.961 11:38:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.530 nvme0n1 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 1 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: ]] 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.530 11:38:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.102 nvme0n1 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: ]] 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.102 11:38:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.675 nvme0n1 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: ]] 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.675 11:38:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.935 11:38:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.935 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.935 11:38:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:39.935 11:38:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:39.935 11:38:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:39.935 11:38:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.935 11:38:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.935 11:38:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:39.935 11:38:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:39.935 11:38:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:39.935 11:38:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:39.935 11:38:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:39.935 11:38:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:39.935 11:38:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.935 11:38:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.505 nvme0n1 00:29:40.505 11:38:09 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:40.505 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.505 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.505 11:38:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:40.505 11:38:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.505 11:38:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:40.505 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.505 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.505 11:38:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:40.505 11:38:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.505 11:38:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:40.505 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.505 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:40.505 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.505 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:40.505 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:40.506 11:38:09 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:40.506 11:38:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.076 nvme0n1 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # 
echo ffdhe8192 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRmZmY0Zjc5MjM1ZWEzMmU2ZGE2NDcyMTQyN2UxNjZcB6LI: 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: ]] 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBlNzBkYzBiNTczYzczZjk2ZDAwYzM1OGI3MTE0MWFlYjc3ZjYyZjEzMTUzNzYxYTEzYzI4YmU1MTY2OGY0YlIp68k=: 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:41.076 11:38:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.018 nvme0n1 00:29:42.018 11:38:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.018 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.018 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: ]] 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.019 11:38:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.962 nvme0n1 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDVjOTFjNjVjMTk2NTg0YjAzMWFjNzZmMmMwZDFiM2ZX0p3i: 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: ]] 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWI1ZWZiYzQ2NDk0OTUxN2MxNTViNmIyNzU1MTQ4OWaDbB8A: 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.962 11:38:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.911 nvme0n1 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVmY2ViNzk5N2E4MDI1NzYzYTdmODVjMDllMjg1NTkwOWY0ZDJiYmE1NDQxODk3LkuXyg==: 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: ]] 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI2Y2RjMjBmYzFjZTk0MTE5Zjg5M2NkODVlZTJiZDjjlBhq: 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:43.911 11:38:12 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.911 11:38:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.855 nvme0n1 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWFjYTU0MTNkZjZkYWYwYjAzZDEwNzRkMzNhOWVkZWE0YTI3Yjg0NDliYzg2ZjdmNmQ0YWM4NjJjYjcxZWIwMoHFCdQ=: 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe8192 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.855 11:38:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.798 nvme0n1 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNiNTdjODc3MTE3YTI2ZGU1NzA1NzAyYmQ0MmRjNzUwZTkyMzM1NDdiYWVhNmQ3OQTrTw==: 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: ]] 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWNiODM4ODhlNjBiYmUwMWU0ZjgxMzA1ZTFjZjIzNmIwMmRlZjQ4NTQyZjQwY2Y5XK/HwQ==: 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.798 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.799 11:38:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:45.799 11:38:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:45.799 11:38:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:45.799 11:38:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:45.799 11:38:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.799 11:38:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.799 11:38:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:45.799 11:38:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:45.799 11:38:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:45.799 11:38:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:45.799 11:38:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:45.799 11:38:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:45.799 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:29:45.799 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:45.799 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:29:45.799 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:45.799 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 
00:29:45.799 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:45.799 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:45.799 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.799 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.060 request: 00:29:46.060 { 00:29:46.060 "name": "nvme0", 00:29:46.060 "trtype": "rdma", 00:29:46.060 "traddr": "192.168.100.8", 00:29:46.060 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:46.060 "adrfam": "ipv4", 00:29:46.060 "trsvcid": "4420", 00:29:46.060 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:46.060 "method": "bdev_nvme_attach_controller", 00:29:46.060 "req_id": 1 00:29:46.060 } 00:29:46.060 Got JSON-RPC error response 00:29:46.060 response: 00:29:46.060 { 00:29:46.060 "code": -5, 00:29:46.060 "message": "Input/output error" 00:29:46.060 } 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 
00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.060 11:38:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.060 request: 00:29:46.060 { 00:29:46.060 "name": "nvme0", 00:29:46.060 "trtype": "rdma", 00:29:46.060 "traddr": "192.168.100.8", 00:29:46.060 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:46.060 "adrfam": "ipv4", 00:29:46.060 "trsvcid": "4420", 00:29:46.060 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:46.060 "dhchap_key": "key2", 00:29:46.060 "method": "bdev_nvme_attach_controller", 00:29:46.060 "req_id": 1 00:29:46.060 } 00:29:46.060 Got JSON-RPC error response 00:29:46.060 response: 00:29:46.060 { 00:29:46.060 "code": -5, 00:29:46.060 "message": "Input/output error" 00:29:46.060 } 00:29:46.060 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:29:46.060 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:29:46.060 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:46.060 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:46.060 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:46.060 11:38:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.060 11:38:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:46.060 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.060 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.060 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.322 11:38:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:29:46.322 11:38:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:29:46.322 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:46.322 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:46.322 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:46.322 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.322 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.322 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:29:46.322 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:29:46.322 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:29:46.322 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:29:46.322 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:29:46.322 11:38:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:46.322 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:29:46.322 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:46.322 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:29:46.322 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:46.322 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:29:46.322 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:46.322 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:46.322 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.322 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.322 request: 00:29:46.322 { 00:29:46.322 "name": "nvme0", 00:29:46.322 "trtype": "rdma", 00:29:46.322 "traddr": "192.168.100.8", 00:29:46.323 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:46.323 "adrfam": "ipv4", 00:29:46.323 "trsvcid": "4420", 00:29:46.323 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:46.323 "dhchap_key": "key1", 00:29:46.323 "dhchap_ctrlr_key": "ckey2", 00:29:46.323 "method": "bdev_nvme_attach_controller", 00:29:46.323 "req_id": 1 00:29:46.323 } 00:29:46.323 Got JSON-RPC error response 00:29:46.323 response: 00:29:46.323 { 00:29:46.323 "code": -5, 00:29:46.323 "message": "Input/output error" 00:29:46.323 } 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:29:46.323 rmmod nvme_rdma 00:29:46.323 rmmod nvme_fabrics 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3769488 ']' 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3769488 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@949 -- # '[' -z 3769488 ']' 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@953 -- # kill -0 3769488 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@954 -- # uname 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3769488 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3769488' 00:29:46.323 killing process with pid 3769488 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@968 -- # kill 3769488 00:29:46.323 11:38:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@973 -- # wait 3769488 00:29:46.584 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:46.584 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:29:46.584 11:38:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:46.584 11:38:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:46.584 11:38:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:46.584 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:46.584 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:29:46.584 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:46.584 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:46.584 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:46.584 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:46.584 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:46.584 11:38:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:29:46.584 11:38:15 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:29:49.882 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:49.882 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:49.882 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:49.882 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:49.882 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:49.882 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:49.882 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:50.142 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:50.142 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:50.142 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:50.142 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:50.142 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:50.142 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:50.142 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:50.142 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:50.142 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:50.142 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:50.403 11:38:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Gzf /tmp/spdk.key-null.uaw /tmp/spdk.key-sha256.IVc /tmp/spdk.key-sha384.05P /tmp/spdk.key-sha512.bn8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:29:50.403 11:38:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:29:53.763 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:53.763 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:53.763 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:53.763 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:53.763 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:53.763 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:53.763 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:53.763 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:53.763 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:53.763 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:29:53.763 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:53.763 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:53.763 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:53.763 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:53.763 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:53.763 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:53.763 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:54.024 00:29:54.024 real 1m5.129s 00:29:54.024 user 1m0.236s 00:29:54.024 sys 0m14.940s 00:29:54.024 11:38:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:54.024 11:38:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.024 ************************************ 00:29:54.024 END TEST nvmf_auth_host 00:29:54.024 ************************************ 00:29:54.285 11:38:23 nvmf_rdma -- nvmf/nvmf.sh@106 -- # [[ rdma == \t\c\p ]] 00:29:54.285 11:38:23 nvmf_rdma -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:29:54.285 11:38:23 nvmf_rdma -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:29:54.285 11:38:23 nvmf_rdma -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:29:54.285 11:38:23 nvmf_rdma -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:29:54.286 11:38:23 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:54.286 11:38:23 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:54.286 11:38:23 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:29:54.286 ************************************ 00:29:54.286 START TEST nvmf_bdevperf 00:29:54.286 ************************************ 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:29:54.286 * Looking for test storage... 00:29:54.286 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=[the four near-identical PATH values traced at paths/export.sh@2 through paths/export.sh@4 and echoed at paths/export.sh@6 are elided: each pass re-prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of the inherited PATH, which ends in /usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin] 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@6 -- # echo [same PATH value, elided] 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:54.286 11:38:23
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:54.286 11:38:23 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:00.877 
11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:30:00.877 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:30:00.877 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:30:00.877 Found net devices under 0000:98:00.0: mlx_0_0 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:30:00.877 Found net devices under 0000:98:00.1: mlx_0_1 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@420 -- # rdma_device_init 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # uname 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:30:00.877 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@63 -- # modprobe ib_core 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf 
-- nvmf/common.sh@105 -- # continue 2 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:30:01.139 26: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:01.139 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:30:01.139 altname enp152s0f0np0 00:30:01.139 altname ens817f0np0 00:30:01.139 inet 192.168.100.8/24 scope global mlx_0_0 00:30:01.139 valid_lft forever preferred_lft forever 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:30:01.139 27: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:01.139 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:30:01.139 altname enp152s0f1np1 00:30:01.139 altname ens817f1np1 00:30:01.139 inet 192.168.100.9/24 scope global mlx_0_1 00:30:01.139 valid_lft forever preferred_lft forever 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # get_available_rdma_ips 
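The allocate_nic_ips pass above resolves each RDMA interface to its IPv4 address with the ip/awk/cut pipeline traced at nvmf/common.sh@113. A minimal bash sketch of that helper, reconstructed from the trace rather than copied from the source:

    # Sketch of the lookup traced above: print the IPv4 address of an interface.
    get_ip_address() {
        local interface=$1
        # "ip -o -4" prints one record per address; field 4 is "ADDR/PREFIX".
        ip -o -4 addr show "$interface" | awk '{ print $4 }' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
    get_ip_address mlx_0_1   # -> 192.168.100.9

These are the same addresses that get_available_rdma_ips re-derives just below, where they are joined into RDMA_IP_LIST and split with head -n 1 and tail -n +2 into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP.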
00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:30:01.139 11:38:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:01.139 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:01.139 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:01.139 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:01.139 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:01.139 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:30:01.140 192.168.100.9' 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:30:01.140 192.168.100.9' 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # head -n 1 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:30:01.140 192.168.100.9' 00:30:01.140 11:38:30 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # tail -n +2 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # head -n 1 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3787450 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3787450 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 3787450 ']' 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:01.140 11:38:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:01.401 [2024-06-10 11:38:30.132513] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:30:01.401 [2024-06-10 11:38:30.132574] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:01.401 EAL: No free 2048 kB hugepages reported on node 1 00:30:01.401 [2024-06-10 11:38:30.211775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:01.401 [2024-06-10 11:38:30.277198] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:01.401 [2024-06-10 11:38:30.277234] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:01.401 [2024-06-10 11:38:30.277243] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:01.401 [2024-06-10 11:38:30.277250] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:30:01.401 [2024-06-10 11:38:30.277256] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:01.401 [2024-06-10 11:38:30.277395] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:01.401 [2024-06-10 11:38:30.277527] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.401 [2024-06-10 11:38:30.277528] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:30:02.344 11:38:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:02.345 11:38:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:30:02.345 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:02.345 11:38:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:02.345 11:38:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:02.345 11:38:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:02.345 11:38:30 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:02.345 11:38:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:02.345 11:38:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:02.345 [2024-06-10 11:38:31.027755] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1360840/0x1364d30) succeed. 00:30:02.345 [2024-06-10 11:38:31.041850] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1361de0/0x13a63c0) succeed. 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:02.345 Malloc0 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:02.345 [2024-06-10 11:38:31.193743] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target 
Listening on 192.168.100.8 port 4420 *** 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:02.345 { 00:30:02.345 "params": { 00:30:02.345 "name": "Nvme$subsystem", 00:30:02.345 "trtype": "$TEST_TRANSPORT", 00:30:02.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:02.345 "adrfam": "ipv4", 00:30:02.345 "trsvcid": "$NVMF_PORT", 00:30:02.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:02.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:02.345 "hdgst": ${hdgst:-false}, 00:30:02.345 "ddgst": ${ddgst:-false} 00:30:02.345 }, 00:30:02.345 "method": "bdev_nvme_attach_controller" 00:30:02.345 } 00:30:02.345 EOF 00:30:02.345 )") 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:30:02.345 11:38:31 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:02.345 "params": { 00:30:02.345 "name": "Nvme1", 00:30:02.345 "trtype": "rdma", 00:30:02.345 "traddr": "192.168.100.8", 00:30:02.345 "adrfam": "ipv4", 00:30:02.345 "trsvcid": "4420", 00:30:02.345 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:02.345 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:02.345 "hdgst": false, 00:30:02.345 "ddgst": false 00:30:02.345 }, 00:30:02.345 "method": "bdev_nvme_attach_controller" 00:30:02.345 }' 00:30:02.345 [2024-06-10 11:38:31.245771] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:30:02.345 [2024-06-10 11:38:31.245817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3787516 ] 00:30:02.345 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.345 [2024-06-10 11:38:31.304689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.606 [2024-06-10 11:38:31.368859] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.606 Running I/O for 1 seconds... 
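The /dev/fd/62 argument in the bdevperf command line above is the file descriptor bash allocates for process substitution: the harness pipes the JSON printed by gen_nvmf_target_json straight into bdevperf instead of writing a config file. An equivalent invocation, sketched under the assumption that the harness environment (gen_nvmf_target_json and the exported NVMF variables) is already sourced:

    # Run the 1-second verify pass against the attached NVMe-oF controller;
    # the <(...) substitution is what the trace shows as --json /dev/fd/62.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1

Writing the same JSON to a file and passing that path to --json would behave identically; the process substitution just avoids the temporary file.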
00:30:03.991
00:30:03.991 Latency(us)
00:30:03.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:03.992 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:03.992 Verification LBA range: start 0x0 length 0x4000
00:30:03.992 Nvme1n1 : 1.01 14536.80 56.78 0.00 0.00 8742.90 709.97 19660.80
00:30:03.992 ===================================================================================================================
00:30:03.992 Total : 14536.80 56.78 0.00 0.00 8742.90 709.97 19660.80
00:30:03.992 11:38:32 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3787833
00:30:03.992 11:38:32 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:30:03.992 11:38:32 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:30:03.992 11:38:32 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:30:03.992 11:38:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:30:03.992 11:38:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:30:03.992 11:38:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:30:03.992 11:38:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:30:03.992 {
00:30:03.992 "params": {
00:30:03.992 "name": "Nvme$subsystem",
00:30:03.992 "trtype": "$TEST_TRANSPORT",
00:30:03.992 "traddr": "$NVMF_FIRST_TARGET_IP",
00:30:03.992 "adrfam": "ipv4",
00:30:03.992 "trsvcid": "$NVMF_PORT",
00:30:03.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:30:03.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:30:03.992 "hdgst": ${hdgst:-false},
00:30:03.992 "ddgst": ${ddgst:-false}
00:30:03.992 },
00:30:03.992 "method": "bdev_nvme_attach_controller"
00:30:03.992 }
00:30:03.992 EOF
00:30:03.992 )")
00:30:03.992 11:38:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:30:03.992 11:38:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:30:03.992 11:38:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:30:03.992 11:38:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:30:03.992 "params": {
00:30:03.992 "name": "Nvme1",
00:30:03.992 "trtype": "rdma",
00:30:03.992 "traddr": "192.168.100.8",
00:30:03.992 "adrfam": "ipv4",
00:30:03.992 "trsvcid": "4420",
00:30:03.992 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:30:03.992 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:30:03.992 "hdgst": false,
00:30:03.992 "ddgst": false
00:30:03.992 },
00:30:03.992 "method": "bdev_nvme_attach_controller"
00:30:03.992 }'
[2024-06-10 11:38:32.754578] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization...
[2024-06-10 11:38:32.754633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3787833 ]
00:30:03.992 EAL: No free 2048 kB hugepages reported on node 1
00:30:03.992 [2024-06-10 11:38:32.813907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:03.992 [2024-06-10 11:38:32.877357] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:30:04.253 Running I/O for 15 seconds...
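This second run is launched in the background (bdevperfpid=3787833) and is deliberately sabotaged: three seconds in, the script hard-kills the nvmf target underneath it, which is what produces the abort storm below. The control flow, paraphrased from the traced host/bdevperf.sh@29-35 as a sketch ($rootdir and $nvmfpid stand in for the harness variables; this is not the verbatim script):

    # Start a long verify run against the target, then kill the target mid-run.
    "$rootdir/build/examples/bdevperf" --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!
    sleep 3
    kill -9 "$nvmfpid"   # hard-kill the nvmf target; in-flight I/O is aborted
    sleep 3              # let the initiator observe the dead controller

The "(00/08)" printed with each abort below is the NVMe (status code type / status code) pair: type 0x0 is the generic command status set, and code 0x08 in that set is "Command Aborted due to SQ Deletion", exactly what a hard-killed target yields for its queued commands.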
00:30:06.798 11:38:35 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3787450
00:30:06.798 11:38:35 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:30:08.184 [2024-06-10 11:38:36.740101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:102960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x186e00
00:30:08.184 [2024-06-10 11:38:36.740144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0
[81 further command/completion pairs elided: with the target killed mid-run, every outstanding command on qpair 1 is dumped in turn, READs for lba 102968 through 103416 (SGL KEYED DATA BLOCK, key:0x186e00) followed by WRITEs for lba 103424 through 103608 (SGL DATA BLOCK OFFSET 0x0), and each completes with the identical status ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0]
00:30:08.187 [2024-06-10 11:38:36.741519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:103616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:08.187 [2024-06-10 11:38:36.741526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0
00:30:08.187 [2024-06-10 11:38:36.741535] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:103632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:103640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:103656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:103680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:103696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 
00:30:08.187 [2024-06-10 11:38:36.741695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:103712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:103728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:103736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:103784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:103792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:103800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.741990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:103848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.741997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.187 [2024-06-10 11:38:36.742006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.187 [2024-06-10 11:38:36.742013] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.188 [2024-06-10 11:38:36.742022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:103864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.188 [2024-06-10 11:38:36.742029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.188 [2024-06-10 11:38:36.742039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:103872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.188 [2024-06-10 11:38:36.742046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.188 [2024-06-10 11:38:36.742054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:103880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.188 [2024-06-10 11:38:36.742061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.188 [2024-06-10 11:38:36.742070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:103888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.188 [2024-06-10 11:38:36.742077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.188 [2024-06-10 11:38:36.742088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:103896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.188 [2024-06-10 11:38:36.742095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.188 [2024-06-10 11:38:36.742104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.188 [2024-06-10 11:38:36.742111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.188 [2024-06-10 11:38:36.742120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:103912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.188 [2024-06-10 11:38:36.742126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.188 [2024-06-10 11:38:36.742135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:103920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.188 [2024-06-10 11:38:36.742142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.188 [2024-06-10 11:38:36.742151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:103928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.188 [2024-06-10 11:38:36.742158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.188 [2024-06-10 11:38:36.742167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:30:08.188 [2024-06-10 11:38:36.742174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.188 [2024-06-10 11:38:36.742182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.188 [2024-06-10 11:38:36.742189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.188 [2024-06-10 11:38:36.742198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.188 [2024-06-10 11:38:36.742205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.188 [2024-06-10 11:38:36.742214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.188 [2024-06-10 11:38:36.742221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.188 [2024-06-10 11:38:36.742230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.188 [2024-06-10 11:38:36.742236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:4ee0 p:0 m:0 dnr:0 00:30:08.188 [2024-06-10 11:38:36.744543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:08.188 [2024-06-10 11:38:36.744559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:08.188 [2024-06-10 11:38:36.744566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103976 len:8 PRP1 0x0 PRP2 0x0 00:30:08.188 [2024-06-10 11:38:36.744573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.188 [2024-06-10 11:38:36.744606] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:30:08.188 [2024-06-10 11:38:36.748178] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:08.188 [2024-06-10 11:38:36.768027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:08.188 [2024-06-10 11:38:36.772360] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:08.188 [2024-06-10 11:38:36.772406] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:08.188 [2024-06-10 11:38:36.772423] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:30:09.132 [2024-06-10 11:38:37.776839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:09.132 [2024-06-10 11:38:37.776859] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
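The burst above is the host-side view of the target going away mid-I/O: bdev_nvme frees the disconnected qpair, manually completes everything still queued as ABORTED - SQ DELETION, then tries to reconnect, and RDMA_CM_EVENT_REJECTED with RDMA connect error -74 is what the host sees while nothing is listening on the target side. The retries below repeat that pattern roughly once a second until the target is back. A minimal sketch of the kill-and-reinit pattern this test drives; the real logic lives in test/nvmf/host/bdevperf.sh, and NVMF_APP, nvmfpid and tgt_init are that harness's own helpers, shown here only in outline:

  "${NVMF_APP[@]}" -m 0xE &   # start nvmf_tgt (harness array holding the target command)
  nvmfpid=$!
  sleep 5                     # let bdevperf build up in-flight I/O
  kill -9 "$nvmfpid"          # hard-kill the target: the host logs ABORTED - SQ DELETION,
                              # then reconnect retries fail with RDMA connect error -74
  tgt_init                    # restart nvmf_tgt and re-create transport/subsystem/listener

In the log, the kill shows up below as "line 35: 3787450 Killed" and the restart as the tgt_init / nvmfappstart trace that follows it.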
00:30:09.132 [2024-06-10 11:38:37.777079] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.132 [2024-06-10 11:38:37.777088] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.132 [2024-06-10 11:38:37.777096] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:30:09.132 [2024-06-10 11:38:37.778150] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:09.132 [2024-06-10 11:38:37.780619] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.132 [2024-06-10 11:38:37.791975] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.132 [2024-06-10 11:38:37.795895] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:09.132 [2024-06-10 11:38:37.795913] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:09.132 [2024-06-10 11:38:37.795919] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:30:10.074 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3787450 Killed "${NVMF_APP[@]}" "$@" 00:30:10.074 11:38:38 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:30:10.074 11:38:38 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:10.074 11:38:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:10.074 11:38:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:10.074 11:38:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:10.074 11:38:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:10.074 11:38:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3789086 00:30:10.074 11:38:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3789086 00:30:10.074 11:38:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 3789086 ']' 00:30:10.075 11:38:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:10.075 11:38:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:10.075 11:38:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:10.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:10.075 11:38:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:10.075 11:38:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:10.075 [2024-06-10 11:38:38.747251] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:30:10.075 [2024-06-10 11:38:38.747289] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:10.075 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.075 [2024-06-10 11:38:38.800209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:10.075 [2024-06-10 11:38:38.800232] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.075 [2024-06-10 11:38:38.800452] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.075 [2024-06-10 11:38:38.800461] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.075 [2024-06-10 11:38:38.800468] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:30:10.075 [2024-06-10 11:38:38.801725] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:10.075 [2024-06-10 11:38:38.803999] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:10.075 [2024-06-10 11:38:38.814761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:10.075 [2024-06-10 11:38:38.815567] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.075 [2024-06-10 11:38:38.819196] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:10.075 [2024-06-10 11:38:38.819215] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:10.075 [2024-06-10 11:38:38.819221] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:30:10.075 [2024-06-10 11:38:38.868588] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:10.075 [2024-06-10 11:38:38.868618] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:10.075 [2024-06-10 11:38:38.868624] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:10.075 [2024-06-10 11:38:38.868628] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:10.075 [2024-06-10 11:38:38.868632] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
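The target was relaunched with reactor mask 0xE (the -m 0xE argument to nvmf_tgt above). 0xE is binary 1110, so core 0 is skipped and exactly three reactors come up, on cores 1, 2 and 3, in the lines that follow. A throwaway bash one-liner to expand such a mask when checking core assignments by hand:

  mask=0xE; for c in {0..63}; do (( (mask >> c) & 1 )) && echo "core $c"; done   # prints core 1, core 2, core 3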
00:30:10.075 [2024-06-10 11:38:38.868746] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:10.075 [2024-06-10 11:38:38.868906] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:30:10.075 [2024-06-10 11:38:38.869022] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.646 11:38:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:10.646 11:38:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:30:10.646 11:38:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:10.646 11:38:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:10.646 11:38:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:10.646 11:38:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:10.646 11:38:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:10.646 11:38:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:10.646 11:38:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:10.908 [2024-06-10 11:38:39.621365] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x985840/0x989d30) succeed. 00:30:10.908 [2024-06-10 11:38:39.631090] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x986de0/0x9cb3c0) succeed. 00:30:10.908 11:38:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:10.908 11:38:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:10.908 11:38:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:10.908 11:38:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:10.908 Malloc0 00:30:10.908 11:38:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:10.908 11:38:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:10.908 11:38:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:10.908 11:38:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:10.908 11:38:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:10.908 11:38:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:10.908 11:38:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:10.908 11:38:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:10.908 11:38:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:10.908 11:38:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:10.908 11:38:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:10.908 11:38:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:10.908 [2024-06-10 11:38:39.761876] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:10.908 11:38:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
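The rpc_cmd calls above rebuild the target state lost when the first nvmf_tgt was killed: an RDMA transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as namespace 1, and a listener on 192.168.100.8 port 4420. Outside the harness the same bring-up can be done with SPDK's scripts/rpc.py against a running nvmf_tgt; a sketch assuming the default /var/tmp/spdk.sock RPC socket:

  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

Once the listener is up, the host's pending reconnect finally succeeds and the bdevperf run below can finish.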
00:30:10.908 11:38:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3787833
00:30:10.908 [2024-06-10 11:38:39.823735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:30:10.908 [2024-06-10 11:38:39.823758] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:10.908 [2024-06-10 11:38:39.823980] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:10.908 [2024-06-10 11:38:39.823989] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:10.908 [2024-06-10 11:38:39.823997] nvme_ctrlr.c:1085:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:30:10.908 [2024-06-10 11:38:39.826491] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:10.908 [2024-06-10 11:38:39.827514] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:10.908 [2024-06-10 11:38:39.840311] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:11.170 [2024-06-10 11:38:39.901888] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:19.347
00:30:19.347                                                                 Latency(us)
00:30:19.347 Device Information          : runtime(s)      IOPS    MiB/s   Fail/s    TO/s   Average      min        max
00:30:19.347 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:19.347   Verification LBA range: start 0x0 length 0x4000
00:30:19.347   Nvme1n1                   :      15.00  12354.90    48.26  7924.31    0.00   6286.66   344.75 1034594.99
00:30:19.347 ===================================================================================================================
00:30:19.347 Total                       :            12354.90    48.26  7924.31    0.00   6286.66   344.75 1034594.99
00:30:19.347 11:38:48 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:30:19.347 11:38:48 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:19.347 11:38:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:19.347 11:38:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:19.347 11:38:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:19.347 11:38:48 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:30:19.347 11:38:48 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:30:19.347 11:38:48 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:30:19.347 11:38:48 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:30:19.347 11:38:48 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:30:19.347 11:38:48 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:30:19.347 11:38:48 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:30:19.347 11:38:48 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:30:19.347 11:38:48 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:30:19.347 rmmod nvme_rdma
00:30:19.347 rmmod nvme_fabrics
00:30:19.347 11:38:48 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:30:19.347 11:38:48 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:30:19.347 11:38:48 nvmf_rdma.nvmf_bdevperf
-- nvmf/common.sh@125 -- # return 0 00:30:19.347 11:38:48 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3789086 ']' 00:30:19.347 11:38:48 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3789086 00:30:19.347 11:38:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@949 -- # '[' -z 3789086 ']' 00:30:19.347 11:38:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@953 -- # kill -0 3789086 00:30:19.607 11:38:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@954 -- # uname 00:30:19.607 11:38:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:19.607 11:38:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3789086 00:30:19.607 11:38:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:30:19.607 11:38:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:19.607 11:38:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3789086' 00:30:19.607 killing process with pid 3789086 00:30:19.607 11:38:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@968 -- # kill 3789086 00:30:19.607 11:38:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@973 -- # wait 3789086 00:30:19.607 11:38:48 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:19.607 11:38:48 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:30:19.607 00:30:19.607 real 0m25.497s 00:30:19.607 user 1m4.255s 00:30:19.607 sys 0m6.096s 00:30:19.607 11:38:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:19.607 11:38:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:19.607 ************************************ 00:30:19.607 END TEST nvmf_bdevperf 00:30:19.607 ************************************ 00:30:19.867 11:38:48 nvmf_rdma -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:30:19.867 11:38:48 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:19.867 11:38:48 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:19.867 11:38:48 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:30:19.867 ************************************ 00:30:19.867 START TEST nvmf_target_disconnect 00:30:19.867 ************************************ 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:30:19.867 * Looking for test storage... 
00:30:19.867 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:19.867 11:38:48 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:19.868 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:30:19.868 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:19.868 11:38:48 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:30:19.868 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:19.868 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:19.868 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:19.868 11:38:48 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:19.868 11:38:48 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.868 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:19.868 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:19.868 11:38:48 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:30:19.868 11:38:48 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:30:28.000 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:30:28.000 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:28.000 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:30:28.001 Found net devices under 0000:98:00.0: mlx_0_0 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:30:28.001 Found net devices under 0000:98:00.1: mlx_0_1 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # uname 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:28.001 11:38:55 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:30:28.001 26: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:28.001 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:30:28.001 altname enp152s0f0np0 00:30:28.001 altname ens817f0np0 00:30:28.001 inet 192.168.100.8/24 scope global mlx_0_0 00:30:28.001 valid_lft forever preferred_lft forever 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:30:28.001 11:38:55 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:30:28.001 27: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:28.001 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:30:28.001 altname enp152s0f1np1 00:30:28.001 altname ens817f1np1 00:30:28.001 inet 192.168.100.9/24 scope global mlx_0_1 00:30:28.001 valid_lft forever preferred_lft forever 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:28.001 11:38:55 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:30:28.001 192.168.100.9' 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:30:28.001 192.168.100.9' 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:30:28.001 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:30:28.002 192.168.100.9' 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:28.002 ************************************ 00:30:28.002 START TEST nvmf_target_disconnect_tc1 00:30:28.002 ************************************ 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc1 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:30:28.002 11:38:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:30:28.002 EAL: No free 2048 kB hugepages reported on node 1 00:30:28.002 [2024-06-10 11:38:55.852575] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:28.002 [2024-06-10 11:38:55.852614] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:28.002 [2024-06-10 11:38:55.852622] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:30:28.002 [2024-06-10 11:38:56.857001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:28.002 [2024-06-10 11:38:56.857052] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:30:28.002 [2024-06-10 11:38:56.857076] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:30:28.002 [2024-06-10 11:38:56.857129] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:28.002 [2024-06-10 11:38:56.857149] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:30:28.002 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:30:28.002 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:28.002 Initializing NVMe Controllers 00:30:28.002 11:38:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:30:28.002 11:38:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:30:28.002 11:38:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:30:28.002 11:38:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:30:28.002 00:30:28.002 real 0m1.119s 00:30:28.002 user 0m0.944s 00:30:28.002 sys 0m0.158s 00:30:28.002 11:38:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:28.002 11:38:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:28.002 ************************************ 00:30:28.002 END TEST nvmf_target_disconnect_tc1 00:30:28.002 ************************************ 00:30:28.002 11:38:56 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:28.002 11:38:56 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:28.002 11:38:56 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:28.002 11:38:56 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:28.002 ************************************ 00:30:28.002 START TEST nvmf_target_disconnect_tc2 00:30:28.002 ************************************ 00:30:28.002 11:38:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc2 00:30:28.002 11:38:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:30:28.002 11:38:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:28.002 11:38:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:28.002 11:38:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:28.002 11:38:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:28.002 11:38:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3794918 00:30:28.002 11:38:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3794918 00:30:28.002 11:38:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:28.002 
11:38:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 3794918 ']' 00:30:28.002 11:38:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.002 11:38:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:28.002 11:38:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:28.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:28.002 11:38:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:28.002 11:38:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:28.263 [2024-06-10 11:38:56.997968] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:30:28.263 [2024-06-10 11:38:56.998017] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:28.263 EAL: No free 2048 kB hugepages reported on node 1 00:30:28.263 [2024-06-10 11:38:57.079299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:28.263 [2024-06-10 11:38:57.173572] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:28.263 [2024-06-10 11:38:57.173628] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:28.263 [2024-06-10 11:38:57.173637] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:28.263 [2024-06-10 11:38:57.173644] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:28.263 [2024-06-10 11:38:57.173650] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:28.263 [2024-06-10 11:38:57.173854] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:30:28.263 [2024-06-10 11:38:57.174045] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:30:28.263 [2024-06-10 11:38:57.174206] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:30:28.263 [2024-06-10 11:38:57.174206] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:30:28.836 11:38:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:28.836 11:38:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:30:28.836 11:38:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:28.836 11:38:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:28.836 11:38:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:29.098 11:38:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:29.098 11:38:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:29.098 11:38:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:29.098 11:38:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:29.098 Malloc0 00:30:29.098 11:38:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:29.098 11:38:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:30:29.098 11:38:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:29.098 11:38:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:29.098 [2024-06-10 11:38:57.910147] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2074450/0x207ff60) succeed. 00:30:29.098 [2024-06-10 11:38:57.926073] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2075a90/0x20c15f0) succeed. 
00:30:29.360 11:38:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:29.360 11:38:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:29.360 11:38:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:29.360 11:38:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:29.360 11:38:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:29.360 11:38:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:29.360 11:38:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:29.360 11:38:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:29.360 11:38:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:29.360 11:38:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:29.360 11:38:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:29.360 11:38:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:29.360 [2024-06-10 11:38:58.112256] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:29.360 11:38:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:29.360 11:38:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:30:29.360 11:38:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:29.360 11:38:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:29.360 11:38:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:29.360 11:38:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3795207 00:30:29.360 11:38:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:29.360 11:38:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:30:29.360 EAL: No free 2048 kB hugepages reported on node 1 00:30:31.274 11:39:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3794918 00:30:31.274 11:39:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:32.661 
Write completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Write completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Read completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Read completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Read completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Write completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Read completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Read completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Read completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Read completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Read completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Write completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Read completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Write completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Read completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Read completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Read completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Read completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Read completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Write completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Read completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Write completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Read completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Read completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Write completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Read completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Read completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Read completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Write completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Write completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Read completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 Write completed with error (sct=0, sc=8) 00:30:32.661 starting I/O failed 00:30:32.661 [2024-06-10 11:39:01.317637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.661 [2024-06-10 11:39:01.320142] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:32.661 [2024-06-10 11:39:01.320156] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:32.661 [2024-06-10 11:39:01.320160] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:33.237 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3794918 Killed "${NVMF_APP[@]}" "$@" 00:30:33.237 
11:39:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:30:33.237 11:39:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:33.237 11:39:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:33.237 11:39:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:33.237 11:39:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:33.237 11:39:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3796057 00:30:33.237 11:39:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3796057 00:30:33.237 11:39:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:33.237 11:39:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 3796057 ']' 00:30:33.237 11:39:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:33.237 11:39:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:33.237 11:39:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:33.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:33.237 11:39:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:33.237 11:39:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:33.237 [2024-06-10 11:39:02.203108] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:30:33.237 [2024-06-10 11:39:02.203158] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:33.501 EAL: No free 2048 kB hugepages reported on node 1 00:30:33.501 [2024-06-10 11:39:02.280279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:33.501 [2024-06-10 11:39:02.324462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.501 qpair failed and we were unable to recover it. 00:30:33.501 [2024-06-10 11:39:02.326846] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:33.501 [2024-06-10 11:39:02.326857] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:33.501 [2024-06-10 11:39:02.326862] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:33.501 [2024-06-10 11:39:02.335009] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:33.501 [2024-06-10 11:39:02.335033] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:33.501 [2024-06-10 11:39:02.335038] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:33.501 [2024-06-10 11:39:02.335043] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:33.501 [2024-06-10 11:39:02.335047] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:33.501 [2024-06-10 11:39:02.335195] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:30:33.501 [2024-06-10 11:39:02.335324] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:30:33.501 [2024-06-10 11:39:02.335476] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:30:33.501 [2024-06-10 11:39:02.335479] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:30:34.073 11:39:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:34.073 11:39:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:30:34.073 11:39:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:34.073 11:39:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:34.073 11:39:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:34.073 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:34.073 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:34.073 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.073 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:34.073 Malloc0 00:30:34.073 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.073 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:30:34.073 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.073 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:34.334 [2024-06-10 11:39:03.059839] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22dd450/0x22e8f60) succeed. 00:30:34.334 [2024-06-10 11:39:03.071679] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22dea90/0x232a5f0) succeed. 
00:30:34.334 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.334 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:34.334 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.335 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:34.335 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.335 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:34.335 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.335 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:34.335 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.335 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:34.335 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.335 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:34.335 [2024-06-10 11:39:03.203173] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:34.335 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.335 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:30:34.335 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.335 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:34.335 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.335 11:39:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3795207 00:30:34.597 [2024-06-10 11:39:03.331486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.597 qpair failed and we were unable to recover it. 
00:30:34.597 [2024-06-10 11:39:03.344036] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.597 [2024-06-10 11:39:03.344086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.597 [2024-06-10 11:39:03.344098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.597 [2024-06-10 11:39:03.344107] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.597 [2024-06-10 11:39:03.344112] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:34.597 [2024-06-10 11:39:03.353413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.597 qpair failed and we were unable to recover it. 00:30:34.597 [2024-06-10 11:39:03.364292] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.597 [2024-06-10 11:39:03.364325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.597 [2024-06-10 11:39:03.364335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.597 [2024-06-10 11:39:03.364340] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.597 [2024-06-10 11:39:03.364345] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:34.597 [2024-06-10 11:39:03.373480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.597 qpair failed and we were unable to recover it. 00:30:34.597 [2024-06-10 11:39:03.384426] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.597 [2024-06-10 11:39:03.384459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.597 [2024-06-10 11:39:03.384479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.597 [2024-06-10 11:39:03.384485] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.597 [2024-06-10 11:39:03.384490] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:34.597 [2024-06-10 11:39:03.393479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.597 qpair failed and we were unable to recover it. 
00:30:34.597 [2024-06-10 11:39:03.403889] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.597 [2024-06-10 11:39:03.403920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.597 [2024-06-10 11:39:03.403931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.597 [2024-06-10 11:39:03.403937] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.597 [2024-06-10 11:39:03.403941] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:34.597 [2024-06-10 11:39:03.413394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.597 qpair failed and we were unable to recover it. 00:30:34.597 [2024-06-10 11:39:03.424165] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.597 [2024-06-10 11:39:03.424197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.597 [2024-06-10 11:39:03.424207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.597 [2024-06-10 11:39:03.424212] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.597 [2024-06-10 11:39:03.424217] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:34.597 [2024-06-10 11:39:03.433592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.597 qpair failed and we were unable to recover it. 00:30:34.597 [2024-06-10 11:39:03.444028] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.597 [2024-06-10 11:39:03.444065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.597 [2024-06-10 11:39:03.444076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.597 [2024-06-10 11:39:03.444080] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.597 [2024-06-10 11:39:03.444085] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:34.597 [2024-06-10 11:39:03.453954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.597 qpair failed and we were unable to recover it. 
00:30:34.597 [2024-06-10 11:39:03.463744] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.597 [2024-06-10 11:39:03.463776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.597 [2024-06-10 11:39:03.463787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.597 [2024-06-10 11:39:03.463791] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.597 [2024-06-10 11:39:03.463796] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:34.597 [2024-06-10 11:39:03.473948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.597 qpair failed and we were unable to recover it. 00:30:34.597 [2024-06-10 11:39:03.484175] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.597 [2024-06-10 11:39:03.484204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.597 [2024-06-10 11:39:03.484214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.597 [2024-06-10 11:39:03.484219] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.597 [2024-06-10 11:39:03.484223] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:34.597 [2024-06-10 11:39:03.493604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.597 qpair failed and we were unable to recover it. 00:30:34.597 [2024-06-10 11:39:03.503641] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.597 [2024-06-10 11:39:03.503672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.597 [2024-06-10 11:39:03.503681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.597 [2024-06-10 11:39:03.503686] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.597 [2024-06-10 11:39:03.503691] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:34.597 [2024-06-10 11:39:03.513741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.597 qpair failed and we were unable to recover it. 
00:30:34.597 [2024-06-10 11:39:03.524033] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.597 [2024-06-10 11:39:03.524061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.597 [2024-06-10 11:39:03.524074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.597 [2024-06-10 11:39:03.524079] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.597 [2024-06-10 11:39:03.524083] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:34.597 [2024-06-10 11:39:03.533799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.597 qpair failed and we were unable to recover it. 00:30:34.597 [2024-06-10 11:39:03.544134] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.597 [2024-06-10 11:39:03.544166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.597 [2024-06-10 11:39:03.544176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.597 [2024-06-10 11:39:03.544180] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.597 [2024-06-10 11:39:03.544184] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:34.597 [2024-06-10 11:39:03.553548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.597 qpair failed and we were unable to recover it. 00:30:34.597 [2024-06-10 11:39:03.564274] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.598 [2024-06-10 11:39:03.564301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.598 [2024-06-10 11:39:03.564310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.598 [2024-06-10 11:39:03.564315] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.598 [2024-06-10 11:39:03.564320] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:34.858 [2024-06-10 11:39:03.573867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.858 qpair failed and we were unable to recover it. 
00:30:34.858 [2024-06-10 11:39:03.584808] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.858 [2024-06-10 11:39:03.584837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.858 [2024-06-10 11:39:03.584846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.858 [2024-06-10 11:39:03.584851] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.858 [2024-06-10 11:39:03.584855] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:34.858 [2024-06-10 11:39:03.593850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.858 qpair failed and we were unable to recover it. 00:30:34.858 [2024-06-10 11:39:03.604736] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.858 [2024-06-10 11:39:03.604771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.858 [2024-06-10 11:39:03.604791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.858 [2024-06-10 11:39:03.604797] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.858 [2024-06-10 11:39:03.604805] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:34.858 [2024-06-10 11:39:03.614321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.858 qpair failed and we were unable to recover it. 00:30:34.858 [2024-06-10 11:39:03.624840] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.858 [2024-06-10 11:39:03.624865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.858 [2024-06-10 11:39:03.624876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.858 [2024-06-10 11:39:03.624881] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.858 [2024-06-10 11:39:03.624886] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:34.858 [2024-06-10 11:39:03.634239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.858 qpair failed and we were unable to recover it. 
00:30:34.858 [2024-06-10 11:39:03.644383] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.858 [2024-06-10 11:39:03.644413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.858 [2024-06-10 11:39:03.644423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.858 [2024-06-10 11:39:03.644428] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.858 [2024-06-10 11:39:03.644432] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:34.858 [2024-06-10 11:39:03.654042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.858 qpair failed and we were unable to recover it. 00:30:34.858 [2024-06-10 11:39:03.664901] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.858 [2024-06-10 11:39:03.664935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.858 [2024-06-10 11:39:03.664945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.858 [2024-06-10 11:39:03.664950] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.858 [2024-06-10 11:39:03.664954] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:34.858 [2024-06-10 11:39:03.674223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.858 qpair failed and we were unable to recover it. 00:30:34.858 [2024-06-10 11:39:03.684845] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.858 [2024-06-10 11:39:03.684870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.858 [2024-06-10 11:39:03.684879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.859 [2024-06-10 11:39:03.684884] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.859 [2024-06-10 11:39:03.684888] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:34.859 [2024-06-10 11:39:03.694469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.859 qpair failed and we were unable to recover it. 
00:30:34.859 [2024-06-10 11:39:03.704877] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.859 [2024-06-10 11:39:03.704906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.859 [2024-06-10 11:39:03.704916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.859 [2024-06-10 11:39:03.704920] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.859 [2024-06-10 11:39:03.704925] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:34.859 [2024-06-10 11:39:03.714370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.859 qpair failed and we were unable to recover it. 00:30:34.859 [2024-06-10 11:39:03.724884] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.859 [2024-06-10 11:39:03.724915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.859 [2024-06-10 11:39:03.724934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.859 [2024-06-10 11:39:03.724940] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.859 [2024-06-10 11:39:03.724945] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:34.859 [2024-06-10 11:39:03.734321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.859 qpair failed and we were unable to recover it. 00:30:34.859 [2024-06-10 11:39:03.745241] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.859 [2024-06-10 11:39:03.745275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.859 [2024-06-10 11:39:03.745295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.859 [2024-06-10 11:39:03.745300] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.859 [2024-06-10 11:39:03.745305] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:34.859 [2024-06-10 11:39:03.754571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.859 qpair failed and we were unable to recover it. 
00:30:34.859 [2024-06-10 11:39:03.765046] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.859 [2024-06-10 11:39:03.765078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.859 [2024-06-10 11:39:03.765089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.859 [2024-06-10 11:39:03.765094] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.859 [2024-06-10 11:39:03.765098] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:34.859 [2024-06-10 11:39:03.774437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:34.859 qpair failed and we were unable to recover it.
00:30:34.859 [2024-06-10 11:39:03.785252] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.859 [2024-06-10 11:39:03.785282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.859 [2024-06-10 11:39:03.785292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.859 [2024-06-10 11:39:03.785299] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.859 [2024-06-10 11:39:03.785304] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:34.859 [2024-06-10 11:39:03.794515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:34.859 qpair failed and we were unable to recover it.
00:30:34.859 [2024-06-10 11:39:03.804872] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.859 [2024-06-10 11:39:03.804900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.859 [2024-06-10 11:39:03.804910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.859 [2024-06-10 11:39:03.804915] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.859 [2024-06-10 11:39:03.804919] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:34.859 [2024-06-10 11:39:03.814571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:34.859 qpair failed and we were unable to recover it.
00:30:34.859 [2024-06-10 11:39:03.824745] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.859 [2024-06-10 11:39:03.824777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.859 [2024-06-10 11:39:03.824786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.859 [2024-06-10 11:39:03.824791] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.859 [2024-06-10 11:39:03.824795] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.119 [2024-06-10 11:39:03.834640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.119 qpair failed and we were unable to recover it.
00:30:35.119 [2024-06-10 11:39:03.845330] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.119 [2024-06-10 11:39:03.845360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.119 [2024-06-10 11:39:03.845370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.119 [2024-06-10 11:39:03.845375] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.119 [2024-06-10 11:39:03.845379] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.119 [2024-06-10 11:39:03.854747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.119 qpair failed and we were unable to recover it.
00:30:35.119 [2024-06-10 11:39:03.865626] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.119 [2024-06-10 11:39:03.865653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.119 [2024-06-10 11:39:03.865673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.119 [2024-06-10 11:39:03.865679] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.119 [2024-06-10 11:39:03.865683] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.119 [2024-06-10 11:39:03.874882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.119 qpair failed and we were unable to recover it.
00:30:35.119 [2024-06-10 11:39:03.885235] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.119 [2024-06-10 11:39:03.885265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.119 [2024-06-10 11:39:03.885284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.119 [2024-06-10 11:39:03.885290] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.119 [2024-06-10 11:39:03.885295] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.119 [2024-06-10 11:39:03.894839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.119 qpair failed and we were unable to recover it.
00:30:35.119 [2024-06-10 11:39:03.905462] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.119 [2024-06-10 11:39:03.905496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.119 [2024-06-10 11:39:03.905507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.119 [2024-06-10 11:39:03.905512] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.119 [2024-06-10 11:39:03.905516] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.119 [2024-06-10 11:39:03.914860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.119 qpair failed and we were unable to recover it.
00:30:35.119 [2024-06-10 11:39:03.925674] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.119 [2024-06-10 11:39:03.925701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.119 [2024-06-10 11:39:03.925710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.119 [2024-06-10 11:39:03.925715] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.119 [2024-06-10 11:39:03.925719] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.119 [2024-06-10 11:39:03.935043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.119 qpair failed and we were unable to recover it.
00:30:35.119 [2024-06-10 11:39:03.945475] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.119 [2024-06-10 11:39:03.945503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.119 [2024-06-10 11:39:03.945512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.119 [2024-06-10 11:39:03.945517] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.119 [2024-06-10 11:39:03.945521] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.119 [2024-06-10 11:39:03.954944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.119 qpair failed and we were unable to recover it.
00:30:35.119 [2024-06-10 11:39:03.965636] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.119 [2024-06-10 11:39:03.965667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.119 [2024-06-10 11:39:03.965690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.119 [2024-06-10 11:39:03.965696] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.119 [2024-06-10 11:39:03.965701] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.119 [2024-06-10 11:39:03.975038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.119 qpair failed and we were unable to recover it.
00:30:35.119 [2024-06-10 11:39:03.985787] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.119 [2024-06-10 11:39:03.985823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.119 [2024-06-10 11:39:03.985835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.119 [2024-06-10 11:39:03.985840] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.119 [2024-06-10 11:39:03.985844] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.119 [2024-06-10 11:39:03.995201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.119 qpair failed and we were unable to recover it.
00:30:35.119 [2024-06-10 11:39:04.005475] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.119 [2024-06-10 11:39:04.005503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.119 [2024-06-10 11:39:04.005512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.119 [2024-06-10 11:39:04.005517] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.119 [2024-06-10 11:39:04.005522] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.119 [2024-06-10 11:39:04.015186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.119 qpair failed and we were unable to recover it.
00:30:35.119 [2024-06-10 11:39:04.025604] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.119 [2024-06-10 11:39:04.025635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.119 [2024-06-10 11:39:04.025644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.119 [2024-06-10 11:39:04.025649] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.119 [2024-06-10 11:39:04.025653] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.119 [2024-06-10 11:39:04.035175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.119 qpair failed and we were unable to recover it.
00:30:35.119 [2024-06-10 11:39:04.045694] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.119 [2024-06-10 11:39:04.045727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.119 [2024-06-10 11:39:04.045736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.119 [2024-06-10 11:39:04.045741] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.119 [2024-06-10 11:39:04.045748] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.120 [2024-06-10 11:39:04.055469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.120 qpair failed and we were unable to recover it.
00:30:35.120 [2024-06-10 11:39:04.065863] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.120 [2024-06-10 11:39:04.065894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.120 [2024-06-10 11:39:04.065903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.120 [2024-06-10 11:39:04.065908] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.120 [2024-06-10 11:39:04.065912] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.120 [2024-06-10 11:39:04.075460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.120 qpair failed and we were unable to recover it.
00:30:35.120 [2024-06-10 11:39:04.085732] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.120 [2024-06-10 11:39:04.085756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.120 [2024-06-10 11:39:04.085768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.120 [2024-06-10 11:39:04.085773] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.120 [2024-06-10 11:39:04.085778] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.382 [2024-06-10 11:39:04.095667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.382 qpair failed and we were unable to recover it.
00:30:35.382 [2024-06-10 11:39:04.105817] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.382 [2024-06-10 11:39:04.105845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.382 [2024-06-10 11:39:04.105854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.382 [2024-06-10 11:39:04.105859] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.382 [2024-06-10 11:39:04.105863] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.382 [2024-06-10 11:39:04.115501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.382 qpair failed and we were unable to recover it.
00:30:35.382 [2024-06-10 11:39:04.125784] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.382 [2024-06-10 11:39:04.125809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.382 [2024-06-10 11:39:04.125819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.382 [2024-06-10 11:39:04.125823] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.382 [2024-06-10 11:39:04.125828] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.382 [2024-06-10 11:39:04.135284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.382 qpair failed and we were unable to recover it.
00:30:35.382 [2024-06-10 11:39:04.146021] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.382 [2024-06-10 11:39:04.146057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.382 [2024-06-10 11:39:04.146066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.382 [2024-06-10 11:39:04.146071] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.382 [2024-06-10 11:39:04.146075] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.382 [2024-06-10 11:39:04.155566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.382 qpair failed and we were unable to recover it.
00:30:35.382 [2024-06-10 11:39:04.166343] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.382 [2024-06-10 11:39:04.166375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.382 [2024-06-10 11:39:04.166384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.382 [2024-06-10 11:39:04.166389] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.382 [2024-06-10 11:39:04.166393] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.382 [2024-06-10 11:39:04.175656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.382 qpair failed and we were unable to recover it.
00:30:35.382 [2024-06-10 11:39:04.186493] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.382 [2024-06-10 11:39:04.186528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.382 [2024-06-10 11:39:04.186548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.382 [2024-06-10 11:39:04.186554] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.382 [2024-06-10 11:39:04.186558] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.382 [2024-06-10 11:39:04.195742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.382 qpair failed and we were unable to recover it.
00:30:35.382 [2024-06-10 11:39:04.206236] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.382 [2024-06-10 11:39:04.206267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.382 [2024-06-10 11:39:04.206287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.383 [2024-06-10 11:39:04.206292] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.383 [2024-06-10 11:39:04.206297] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.383 [2024-06-10 11:39:04.215734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.383 qpair failed and we were unable to recover it.
00:30:35.383 [2024-06-10 11:39:04.226504] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.383 [2024-06-10 11:39:04.226536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.383 [2024-06-10 11:39:04.226556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.383 [2024-06-10 11:39:04.226565] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.383 [2024-06-10 11:39:04.226570] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.383 [2024-06-10 11:39:04.235693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.383 qpair failed and we were unable to recover it.
00:30:35.383 [2024-06-10 11:39:04.246612] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.383 [2024-06-10 11:39:04.246647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.383 [2024-06-10 11:39:04.246666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.383 [2024-06-10 11:39:04.246672] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.383 [2024-06-10 11:39:04.246677] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.383 [2024-06-10 11:39:04.255865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.383 qpair failed and we were unable to recover it.
00:30:35.383 [2024-06-10 11:39:04.266695] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.383 [2024-06-10 11:39:04.266723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.383 [2024-06-10 11:39:04.266733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.383 [2024-06-10 11:39:04.266738] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.383 [2024-06-10 11:39:04.266743] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.383 [2024-06-10 11:39:04.275887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.383 qpair failed and we were unable to recover it.
00:30:35.383 [2024-06-10 11:39:04.286241] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.383 [2024-06-10 11:39:04.286267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.383 [2024-06-10 11:39:04.286277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.383 [2024-06-10 11:39:04.286282] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.383 [2024-06-10 11:39:04.286286] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.383 [2024-06-10 11:39:04.296108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.383 qpair failed and we were unable to recover it.
00:30:35.383 [2024-06-10 11:39:04.307298] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.383 [2024-06-10 11:39:04.307334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.383 [2024-06-10 11:39:04.307353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.383 [2024-06-10 11:39:04.307360] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.383 [2024-06-10 11:39:04.307366] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.383 [2024-06-10 11:39:04.315987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.383 qpair failed and we were unable to recover it.
00:30:35.383 [2024-06-10 11:39:04.326787] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.383 [2024-06-10 11:39:04.326820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.383 [2024-06-10 11:39:04.326831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.383 [2024-06-10 11:39:04.326836] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.383 [2024-06-10 11:39:04.326840] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.383 [2024-06-10 11:39:04.336350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.383 qpair failed and we were unable to recover it.
00:30:35.383 [2024-06-10 11:39:04.346684] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.383 [2024-06-10 11:39:04.346714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.383 [2024-06-10 11:39:04.346724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.383 [2024-06-10 11:39:04.346730] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.383 [2024-06-10 11:39:04.346734] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.645 [2024-06-10 11:39:04.356224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.645 qpair failed and we were unable to recover it.
00:30:35.645 [2024-06-10 11:39:04.366547] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.645 [2024-06-10 11:39:04.366575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.645 [2024-06-10 11:39:04.366584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.645 [2024-06-10 11:39:04.366589] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.645 [2024-06-10 11:39:04.366594] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.645 [2024-06-10 11:39:04.376169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.645 qpair failed and we were unable to recover it.
00:30:35.645 [2024-06-10 11:39:04.386935] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.645 [2024-06-10 11:39:04.386966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.645 [2024-06-10 11:39:04.386986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.645 [2024-06-10 11:39:04.386991] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.645 [2024-06-10 11:39:04.386996] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.645 [2024-06-10 11:39:04.396508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.645 qpair failed and we were unable to recover it.
00:30:35.645 [2024-06-10 11:39:04.407140] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.645 [2024-06-10 11:39:04.407175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.646 [2024-06-10 11:39:04.407189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.646 [2024-06-10 11:39:04.407194] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.646 [2024-06-10 11:39:04.407198] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.646 [2024-06-10 11:39:04.416538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.646 qpair failed and we were unable to recover it.
00:30:35.646 [2024-06-10 11:39:04.426980] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.646 [2024-06-10 11:39:04.427005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.646 [2024-06-10 11:39:04.427015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.646 [2024-06-10 11:39:04.427020] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.646 [2024-06-10 11:39:04.427024] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.646 [2024-06-10 11:39:04.436510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.646 qpair failed and we were unable to recover it.
00:30:35.646 [2024-06-10 11:39:04.446848] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.646 [2024-06-10 11:39:04.446874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.646 [2024-06-10 11:39:04.446884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.646 [2024-06-10 11:39:04.446889] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.646 [2024-06-10 11:39:04.446893] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.646 [2024-06-10 11:39:04.456526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.646 qpair failed and we were unable to recover it.
00:30:35.646 [2024-06-10 11:39:04.467062] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.646 [2024-06-10 11:39:04.467094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.646 [2024-06-10 11:39:04.467104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.646 [2024-06-10 11:39:04.467108] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.646 [2024-06-10 11:39:04.467112] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.646 [2024-06-10 11:39:04.476647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.646 qpair failed and we were unable to recover it.
00:30:35.646 [2024-06-10 11:39:04.487159] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.646 [2024-06-10 11:39:04.487189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.646 [2024-06-10 11:39:04.487199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.646 [2024-06-10 11:39:04.487204] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.646 [2024-06-10 11:39:04.487211] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.646 [2024-06-10 11:39:04.496677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.646 qpair failed and we were unable to recover it.
00:30:35.646 [2024-06-10 11:39:04.507154] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.646 [2024-06-10 11:39:04.507188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.646 [2024-06-10 11:39:04.507198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.646 [2024-06-10 11:39:04.507203] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.646 [2024-06-10 11:39:04.507207] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.646 [2024-06-10 11:39:04.516190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.646 qpair failed and we were unable to recover it.
00:30:35.646 [2024-06-10 11:39:04.526397] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.646 [2024-06-10 11:39:04.526422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.646 [2024-06-10 11:39:04.526431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.646 [2024-06-10 11:39:04.526436] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.646 [2024-06-10 11:39:04.526440] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.646 [2024-06-10 11:39:04.536675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.646 qpair failed and we were unable to recover it.
00:30:35.646 [2024-06-10 11:39:04.547158] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.646 [2024-06-10 11:39:04.547190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.646 [2024-06-10 11:39:04.547199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.646 [2024-06-10 11:39:04.547204] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.646 [2024-06-10 11:39:04.547208] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.646 [2024-06-10 11:39:04.556734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.646 qpair failed and we were unable to recover it.
00:30:35.646 [2024-06-10 11:39:04.567474] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.646 [2024-06-10 11:39:04.567511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.646 [2024-06-10 11:39:04.567530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.646 [2024-06-10 11:39:04.567536] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.646 [2024-06-10 11:39:04.567540] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.646 [2024-06-10 11:39:04.577018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.646 qpair failed and we were unable to recover it.
00:30:35.646 [2024-06-10 11:39:04.587534] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.646 [2024-06-10 11:39:04.587568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.646 [2024-06-10 11:39:04.587579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.646 [2024-06-10 11:39:04.587584] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.646 [2024-06-10 11:39:04.587588] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.646 [2024-06-10 11:39:04.597065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.646 qpair failed and we were unable to recover it.
00:30:35.646 [2024-06-10 11:39:04.607143] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.646 [2024-06-10 11:39:04.607173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.646 [2024-06-10 11:39:04.607183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.646 [2024-06-10 11:39:04.607188] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.646 [2024-06-10 11:39:04.607192] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.908 [2024-06-10 11:39:04.616994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.908 qpair failed and we were unable to recover it.
00:30:35.908 [2024-06-10 11:39:04.627301] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.908 [2024-06-10 11:39:04.627340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.908 [2024-06-10 11:39:04.627349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.908 [2024-06-10 11:39:04.627354] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.908 [2024-06-10 11:39:04.627359] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.908 [2024-06-10 11:39:04.636927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.908 qpair failed and we were unable to recover it.
00:30:35.908 [2024-06-10 11:39:04.647858] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.908 [2024-06-10 11:39:04.647894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.908 [2024-06-10 11:39:04.647914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.908 [2024-06-10 11:39:04.647920] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.908 [2024-06-10 11:39:04.647924] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.908 [2024-06-10 11:39:04.657516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.908 qpair failed and we were unable to recover it.
00:30:35.908 [2024-06-10 11:39:04.667788] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.908 [2024-06-10 11:39:04.667825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.908 [2024-06-10 11:39:04.667836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.908 [2024-06-10 11:39:04.667844] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.908 [2024-06-10 11:39:04.667848] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.908 [2024-06-10 11:39:04.677144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.908 qpair failed and we were unable to recover it.
00:30:35.908 [2024-06-10 11:39:04.687471] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.909 [2024-06-10 11:39:04.687501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.909 [2024-06-10 11:39:04.687512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.909 [2024-06-10 11:39:04.687516] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.909 [2024-06-10 11:39:04.687521] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.909 [2024-06-10 11:39:04.697153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.909 qpair failed and we were unable to recover it.
00:30:35.909 [2024-06-10 11:39:04.707932] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.909 [2024-06-10 11:39:04.707970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.909 [2024-06-10 11:39:04.707979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.909 [2024-06-10 11:39:04.707984] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.909 [2024-06-10 11:39:04.707988] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.909 [2024-06-10 11:39:04.717210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.909 qpair failed and we were unable to recover it.
00:30:35.909 [2024-06-10 11:39:04.727865] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.909 [2024-06-10 11:39:04.727897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.909 [2024-06-10 11:39:04.727907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.909 [2024-06-10 11:39:04.727911] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.909 [2024-06-10 11:39:04.727915] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.909 [2024-06-10 11:39:04.737321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.909 qpair failed and we were unable to recover it.
00:30:35.909 [2024-06-10 11:39:04.748024] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.909 [2024-06-10 11:39:04.748050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.909 [2024-06-10 11:39:04.748059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.909 [2024-06-10 11:39:04.748063] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.909 [2024-06-10 11:39:04.748067] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.909 [2024-06-10 11:39:04.757521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.909 qpair failed and we were unable to recover it.
00:30:35.909 [2024-06-10 11:39:04.767753] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.909 [2024-06-10 11:39:04.767785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.909 [2024-06-10 11:39:04.767795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.909 [2024-06-10 11:39:04.767799] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.909 [2024-06-10 11:39:04.767804] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.909 [2024-06-10 11:39:04.777519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.909 qpair failed and we were unable to recover it.
00:30:35.909 [2024-06-10 11:39:04.788193] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.909 [2024-06-10 11:39:04.788228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.909 [2024-06-10 11:39:04.788237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.909 [2024-06-10 11:39:04.788242] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.909 [2024-06-10 11:39:04.788246] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.909 [2024-06-10 11:39:04.797435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.909 qpair failed and we were unable to recover it.
00:30:35.909 [2024-06-10 11:39:04.808179] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.909 [2024-06-10 11:39:04.808204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.909 [2024-06-10 11:39:04.808213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.909 [2024-06-10 11:39:04.808218] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.909 [2024-06-10 11:39:04.808222] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.909 [2024-06-10 11:39:04.817453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.909 qpair failed and we were unable to recover it.
00:30:35.909 [2024-06-10 11:39:04.828246] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.909 [2024-06-10 11:39:04.828280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.909 [2024-06-10 11:39:04.828290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.909 [2024-06-10 11:39:04.828294] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.909 [2024-06-10 11:39:04.828299] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.909 [2024-06-10 11:39:04.837678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.909 qpair failed and we were unable to recover it.
00:30:35.909 [2024-06-10 11:39:04.847874] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.909 [2024-06-10 11:39:04.847901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.909 [2024-06-10 11:39:04.847913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.909 [2024-06-10 11:39:04.847918] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.909 [2024-06-10 11:39:04.847922] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.909 [2024-06-10 11:39:04.857667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.909 qpair failed and we were unable to recover it.
00:30:35.909 [2024-06-10 11:39:04.868075] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.909 [2024-06-10 11:39:04.868110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.909 [2024-06-10 11:39:04.868120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.909 [2024-06-10 11:39:04.868125] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.909 [2024-06-10 11:39:04.868129] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:35.909 [2024-06-10 11:39:04.877852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:35.909 qpair failed and we were unable to recover it.
00:30:36.171 [2024-06-10 11:39:04.888559] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.171 [2024-06-10 11:39:04.888591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.171 [2024-06-10 11:39:04.888611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.171 [2024-06-10 11:39:04.888617] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.171 [2024-06-10 11:39:04.888622] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:36.171 [2024-06-10 11:39:04.897756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.171 qpair failed and we were unable to recover it.
00:30:36.171 [2024-06-10 11:39:04.908572] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.171 [2024-06-10 11:39:04.908599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.171 [2024-06-10 11:39:04.908610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.171 [2024-06-10 11:39:04.908614] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.171 [2024-06-10 11:39:04.908619] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.171 [2024-06-10 11:39:04.918084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.171 qpair failed and we were unable to recover it. 00:30:36.171 [2024-06-10 11:39:04.928274] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.171 [2024-06-10 11:39:04.928302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.171 [2024-06-10 11:39:04.928312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.171 [2024-06-10 11:39:04.928317] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.171 [2024-06-10 11:39:04.928324] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.171 [2024-06-10 11:39:04.937698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.171 qpair failed and we were unable to recover it. 00:30:36.171 [2024-06-10 11:39:04.948518] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.171 [2024-06-10 11:39:04.948550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.171 [2024-06-10 11:39:04.948560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.172 [2024-06-10 11:39:04.948565] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.172 [2024-06-10 11:39:04.948569] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.172 [2024-06-10 11:39:04.957896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.172 qpair failed and we were unable to recover it. 
00:30:36.172 [2024-06-10 11:39:04.968715] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.172 [2024-06-10 11:39:04.968749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.172 [2024-06-10 11:39:04.968759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.172 [2024-06-10 11:39:04.968775] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.172 [2024-06-10 11:39:04.968780] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.172 [2024-06-10 11:39:04.977844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.172 qpair failed and we were unable to recover it. 00:30:36.172 [2024-06-10 11:39:04.988715] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.172 [2024-06-10 11:39:04.988750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.172 [2024-06-10 11:39:04.988759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.172 [2024-06-10 11:39:04.988767] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.172 [2024-06-10 11:39:04.988771] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.172 [2024-06-10 11:39:04.997904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.172 qpair failed and we were unable to recover it. 00:30:36.172 [2024-06-10 11:39:05.007967] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.172 [2024-06-10 11:39:05.007993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.172 [2024-06-10 11:39:05.008003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.172 [2024-06-10 11:39:05.008008] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.172 [2024-06-10 11:39:05.008012] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.172 [2024-06-10 11:39:05.017975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.172 qpair failed and we were unable to recover it. 
00:30:36.172 [2024-06-10 11:39:05.027980] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.172 [2024-06-10 11:39:05.028015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.172 [2024-06-10 11:39:05.028025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.172 [2024-06-10 11:39:05.028029] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.172 [2024-06-10 11:39:05.028033] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.172 [2024-06-10 11:39:05.038351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.172 qpair failed and we were unable to recover it. 00:30:36.172 [2024-06-10 11:39:05.047893] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.172 [2024-06-10 11:39:05.047919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.172 [2024-06-10 11:39:05.047929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.172 [2024-06-10 11:39:05.047933] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.172 [2024-06-10 11:39:05.047937] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.172 [2024-06-10 11:39:05.057982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.172 qpair failed and we were unable to recover it. 00:30:36.172 [2024-06-10 11:39:05.068775] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.172 [2024-06-10 11:39:05.068802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.172 [2024-06-10 11:39:05.068811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.172 [2024-06-10 11:39:05.068816] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.172 [2024-06-10 11:39:05.068820] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.172 [2024-06-10 11:39:05.077985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.172 qpair failed and we were unable to recover it. 
00:30:36.172 [2024-06-10 11:39:05.087996] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.172 [2024-06-10 11:39:05.088023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.172 [2024-06-10 11:39:05.088033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.172 [2024-06-10 11:39:05.088037] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.172 [2024-06-10 11:39:05.088042] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.172 [2024-06-10 11:39:05.098271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.172 qpair failed and we were unable to recover it. 00:30:36.172 [2024-06-10 11:39:05.108581] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.172 [2024-06-10 11:39:05.108618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.172 [2024-06-10 11:39:05.108627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.172 [2024-06-10 11:39:05.108634] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.172 [2024-06-10 11:39:05.108638] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.172 [2024-06-10 11:39:05.118617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.172 qpair failed and we were unable to recover it. 00:30:36.172 [2024-06-10 11:39:05.129170] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.172 [2024-06-10 11:39:05.129200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.172 [2024-06-10 11:39:05.129209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.172 [2024-06-10 11:39:05.129214] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.172 [2024-06-10 11:39:05.129218] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.172 [2024-06-10 11:39:05.138537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.172 qpair failed and we were unable to recover it. 
00:30:36.433 [2024-06-10 11:39:05.148886] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.433 [2024-06-10 11:39:05.148915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.433 [2024-06-10 11:39:05.148924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.433 [2024-06-10 11:39:05.148929] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.433 [2024-06-10 11:39:05.148933] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.433 [2024-06-10 11:39:05.158643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.433 qpair failed and we were unable to recover it. 00:30:36.433 [2024-06-10 11:39:05.168856] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.433 [2024-06-10 11:39:05.168883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.433 [2024-06-10 11:39:05.168893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.433 [2024-06-10 11:39:05.168898] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.433 [2024-06-10 11:39:05.168902] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.433 [2024-06-10 11:39:05.178512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.433 qpair failed and we were unable to recover it. 00:30:36.433 [2024-06-10 11:39:05.189466] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.434 [2024-06-10 11:39:05.189496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.434 [2024-06-10 11:39:05.189505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.434 [2024-06-10 11:39:05.189510] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.434 [2024-06-10 11:39:05.189514] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.434 [2024-06-10 11:39:05.198516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.434 qpair failed and we were unable to recover it. 
00:30:36.434 [2024-06-10 11:39:05.209025] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.434 [2024-06-10 11:39:05.209058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.434 [2024-06-10 11:39:05.209079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.434 [2024-06-10 11:39:05.209085] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.434 [2024-06-10 11:39:05.209090] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.434 [2024-06-10 11:39:05.218691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.434 qpair failed and we were unable to recover it. 00:30:36.434 [2024-06-10 11:39:05.229484] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.434 [2024-06-10 11:39:05.229518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.434 [2024-06-10 11:39:05.229529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.434 [2024-06-10 11:39:05.229534] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.434 [2024-06-10 11:39:05.229538] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.434 [2024-06-10 11:39:05.238989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.434 qpair failed and we were unable to recover it. 00:30:36.434 [2024-06-10 11:39:05.249409] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.434 [2024-06-10 11:39:05.249435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.434 [2024-06-10 11:39:05.249445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.434 [2024-06-10 11:39:05.249449] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.434 [2024-06-10 11:39:05.249454] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.434 [2024-06-10 11:39:05.258535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.434 qpair failed and we were unable to recover it. 
00:30:36.434 [2024-06-10 11:39:05.269551] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.434 [2024-06-10 11:39:05.269580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.434 [2024-06-10 11:39:05.269589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.434 [2024-06-10 11:39:05.269594] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.434 [2024-06-10 11:39:05.269598] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.434 [2024-06-10 11:39:05.278940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.434 qpair failed and we were unable to recover it. 00:30:36.434 [2024-06-10 11:39:05.289462] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.434 [2024-06-10 11:39:05.289490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.434 [2024-06-10 11:39:05.289502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.434 [2024-06-10 11:39:05.289507] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.434 [2024-06-10 11:39:05.289511] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.434 [2024-06-10 11:39:05.298945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.434 qpair failed and we were unable to recover it. 00:30:36.434 [2024-06-10 11:39:05.309329] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.434 [2024-06-10 11:39:05.309356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.434 [2024-06-10 11:39:05.309366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.434 [2024-06-10 11:39:05.309371] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.434 [2024-06-10 11:39:05.309375] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.434 [2024-06-10 11:39:05.319080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.434 qpair failed and we were unable to recover it. 
00:30:36.434 [2024-06-10 11:39:05.329197] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.434 [2024-06-10 11:39:05.329223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.434 [2024-06-10 11:39:05.329232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.434 [2024-06-10 11:39:05.329237] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.434 [2024-06-10 11:39:05.329241] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.434 [2024-06-10 11:39:05.339038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.434 qpair failed and we were unable to recover it. 00:30:36.434 [2024-06-10 11:39:05.349876] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.434 [2024-06-10 11:39:05.349905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.434 [2024-06-10 11:39:05.349914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.434 [2024-06-10 11:39:05.349919] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.434 [2024-06-10 11:39:05.349923] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.434 [2024-06-10 11:39:05.359275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.434 qpair failed and we were unable to recover it. 00:30:36.434 [2024-06-10 11:39:05.369109] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.434 [2024-06-10 11:39:05.369140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.434 [2024-06-10 11:39:05.369149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.434 [2024-06-10 11:39:05.369154] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.434 [2024-06-10 11:39:05.369161] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.434 [2024-06-10 11:39:05.379325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.434 qpair failed and we were unable to recover it. 
00:30:36.434 [2024-06-10 11:39:05.390283] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.434 [2024-06-10 11:39:05.390314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.434 [2024-06-10 11:39:05.390323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.434 [2024-06-10 11:39:05.390328] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.434 [2024-06-10 11:39:05.390332] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.434 [2024-06-10 11:39:05.399184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.434 qpair failed and we were unable to recover it. 00:30:36.696 [2024-06-10 11:39:05.409468] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.696 [2024-06-10 11:39:05.409493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.696 [2024-06-10 11:39:05.409503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.696 [2024-06-10 11:39:05.409507] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.696 [2024-06-10 11:39:05.409512] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.696 [2024-06-10 11:39:05.419447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.696 qpair failed and we were unable to recover it. 00:30:36.696 [2024-06-10 11:39:05.429979] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.696 [2024-06-10 11:39:05.430009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.696 [2024-06-10 11:39:05.430019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.696 [2024-06-10 11:39:05.430024] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.696 [2024-06-10 11:39:05.430028] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.696 [2024-06-10 11:39:05.439249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.696 qpair failed and we were unable to recover it. 
00:30:36.696 [2024-06-10 11:39:05.449977] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.696 [2024-06-10 11:39:05.450011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.696 [2024-06-10 11:39:05.450020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.696 [2024-06-10 11:39:05.450025] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.696 [2024-06-10 11:39:05.450029] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.696 [2024-06-10 11:39:05.459368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.696 qpair failed and we were unable to recover it. 00:30:36.696 [2024-06-10 11:39:05.470185] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.696 [2024-06-10 11:39:05.470212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.696 [2024-06-10 11:39:05.470221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.696 [2024-06-10 11:39:05.470226] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.696 [2024-06-10 11:39:05.470231] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.696 [2024-06-10 11:39:05.479590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.696 qpair failed and we were unable to recover it. 00:30:36.696 [2024-06-10 11:39:05.489848] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.696 [2024-06-10 11:39:05.489875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.696 [2024-06-10 11:39:05.489885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.696 [2024-06-10 11:39:05.489889] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.696 [2024-06-10 11:39:05.489893] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.696 [2024-06-10 11:39:05.499486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.696 qpair failed and we were unable to recover it. 
00:30:36.696 [2024-06-10 11:39:05.510373] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.696 [2024-06-10 11:39:05.510409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.696 [2024-06-10 11:39:05.510418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.696 [2024-06-10 11:39:05.510423] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.696 [2024-06-10 11:39:05.510427] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.696 [2024-06-10 11:39:05.519575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.696 qpair failed and we were unable to recover it. 00:30:36.696 [2024-06-10 11:39:05.530072] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.696 [2024-06-10 11:39:05.530100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.696 [2024-06-10 11:39:05.530109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.696 [2024-06-10 11:39:05.530114] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.696 [2024-06-10 11:39:05.530118] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.696 [2024-06-10 11:39:05.539667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.696 qpair failed and we were unable to recover it. 00:30:36.696 [2024-06-10 11:39:05.549912] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.696 [2024-06-10 11:39:05.549947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.696 [2024-06-10 11:39:05.549956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.696 [2024-06-10 11:39:05.549963] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.696 [2024-06-10 11:39:05.549967] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.696 [2024-06-10 11:39:05.559804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.696 qpair failed and we were unable to recover it. 
00:30:36.696 [2024-06-10 11:39:05.570016] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.696 [2024-06-10 11:39:05.570045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.696 [2024-06-10 11:39:05.570054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.696 [2024-06-10 11:39:05.570059] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.696 [2024-06-10 11:39:05.570063] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.696 [2024-06-10 11:39:05.579806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.696 qpair failed and we were unable to recover it. 00:30:36.697 [2024-06-10 11:39:05.590692] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.697 [2024-06-10 11:39:05.590725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.697 [2024-06-10 11:39:05.590734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.697 [2024-06-10 11:39:05.590740] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.697 [2024-06-10 11:39:05.590744] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.697 [2024-06-10 11:39:05.599939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.697 qpair failed and we were unable to recover it. 00:30:36.697 [2024-06-10 11:39:05.610797] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.697 [2024-06-10 11:39:05.610822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.697 [2024-06-10 11:39:05.610832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.697 [2024-06-10 11:39:05.610837] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.697 [2024-06-10 11:39:05.610841] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.697 [2024-06-10 11:39:05.619715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.697 qpair failed and we were unable to recover it. 
00:30:36.697 [2024-06-10 11:39:05.630517] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.697 [2024-06-10 11:39:05.630551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.697 [2024-06-10 11:39:05.630561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.697 [2024-06-10 11:39:05.630565] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.697 [2024-06-10 11:39:05.630570] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.697 [2024-06-10 11:39:05.640078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.697 qpair failed and we were unable to recover it. 00:30:36.697 [2024-06-10 11:39:05.650466] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.697 [2024-06-10 11:39:05.650495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.697 [2024-06-10 11:39:05.650504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.697 [2024-06-10 11:39:05.650508] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.697 [2024-06-10 11:39:05.650512] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.697 [2024-06-10 11:39:05.660090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.697 qpair failed and we were unable to recover it. 00:30:36.958 [2024-06-10 11:39:05.670479] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.958 [2024-06-10 11:39:05.670507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.958 [2024-06-10 11:39:05.670517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.958 [2024-06-10 11:39:05.670521] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.958 [2024-06-10 11:39:05.670525] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.958 [2024-06-10 11:39:05.680115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.958 qpair failed and we were unable to recover it. 
00:30:36.958 [2024-06-10 11:39:05.690856] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.958 [2024-06-10 11:39:05.690882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.958 [2024-06-10 11:39:05.690892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.958 [2024-06-10 11:39:05.690897] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.958 [2024-06-10 11:39:05.690901] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.958 [2024-06-10 11:39:05.700114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-06-10 11:39:05.710842] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.958 [2024-06-10 11:39:05.710871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.958 [2024-06-10 11:39:05.710881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.958 [2024-06-10 11:39:05.710885] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.958 [2024-06-10 11:39:05.710889] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.958 [2024-06-10 11:39:05.720058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-06-10 11:39:05.730409] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.958 [2024-06-10 11:39:05.730435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.958 [2024-06-10 11:39:05.730447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.958 [2024-06-10 11:39:05.730451] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.958 [2024-06-10 11:39:05.730455] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.958 [2024-06-10 11:39:05.740462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.958 qpair failed and we were unable to recover it. 
00:30:36.958 [2024-06-10 11:39:05.751142] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.958 [2024-06-10 11:39:05.751173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.958 [2024-06-10 11:39:05.751193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.958 [2024-06-10 11:39:05.751199] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.958 [2024-06-10 11:39:05.751203] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.958 [2024-06-10 11:39:05.760570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-06-10 11:39:05.770934] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.958 [2024-06-10 11:39:05.770960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.958 [2024-06-10 11:39:05.770970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.958 [2024-06-10 11:39:05.770975] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.958 [2024-06-10 11:39:05.770979] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.958 [2024-06-10 11:39:05.780463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.958 qpair failed and we were unable to recover it. 00:30:36.958 [2024-06-10 11:39:05.790755] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.958 [2024-06-10 11:39:05.790790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.958 [2024-06-10 11:39:05.790809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.958 [2024-06-10 11:39:05.790815] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.958 [2024-06-10 11:39:05.790820] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.958 [2024-06-10 11:39:05.800605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.958 qpair failed and we were unable to recover it. 
00:30:36.958 [2024-06-10 11:39:05.810219] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.959 [2024-06-10 11:39:05.810246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.959 [2024-06-10 11:39:05.810257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.959 [2024-06-10 11:39:05.810262] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.959 [2024-06-10 11:39:05.810269] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.959 [2024-06-10 11:39:05.820529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-06-10 11:39:05.831190] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.959 [2024-06-10 11:39:05.831229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.959 [2024-06-10 11:39:05.831248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.959 [2024-06-10 11:39:05.831254] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.959 [2024-06-10 11:39:05.831259] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.959 [2024-06-10 11:39:05.840773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-06-10 11:39:05.851309] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.959 [2024-06-10 11:39:05.851337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.959 [2024-06-10 11:39:05.851348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.959 [2024-06-10 11:39:05.851353] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.959 [2024-06-10 11:39:05.851357] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.959 [2024-06-10 11:39:05.860556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.959 qpair failed and we were unable to recover it. 
00:30:36.959 [2024-06-10 11:39:05.871281] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.959 [2024-06-10 11:39:05.871309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.959 [2024-06-10 11:39:05.871319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.959 [2024-06-10 11:39:05.871324] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.959 [2024-06-10 11:39:05.871328] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.959 [2024-06-10 11:39:05.880623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-06-10 11:39:05.890830] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.959 [2024-06-10 11:39:05.890858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.959 [2024-06-10 11:39:05.890868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.959 [2024-06-10 11:39:05.890873] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.959 [2024-06-10 11:39:05.890877] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.959 [2024-06-10 11:39:05.900710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.959 qpair failed and we were unable to recover it. 00:30:36.959 [2024-06-10 11:39:05.911178] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.959 [2024-06-10 11:39:05.911209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.959 [2024-06-10 11:39:05.911219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.959 [2024-06-10 11:39:05.911223] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.959 [2024-06-10 11:39:05.911228] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:36.959 [2024-06-10 11:39:05.920699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.959 qpair failed and we were unable to recover it. 
00:30:37.221 [2024-06-10 11:39:05.931168] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.221 [2024-06-10 11:39:05.931195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.221 [2024-06-10 11:39:05.931205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.221 [2024-06-10 11:39:05.931209] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.221 [2024-06-10 11:39:05.931213] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.221 [2024-06-10 11:39:05.940767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.221 qpair failed and we were unable to recover it. 00:30:37.221 [2024-06-10 11:39:05.951431] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.221 [2024-06-10 11:39:05.951456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.221 [2024-06-10 11:39:05.951466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.221 [2024-06-10 11:39:05.951470] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.221 [2024-06-10 11:39:05.951475] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.221 [2024-06-10 11:39:05.961229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.221 qpair failed and we were unable to recover it. 00:30:37.221 [2024-06-10 11:39:05.971191] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.221 [2024-06-10 11:39:05.971217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.221 [2024-06-10 11:39:05.971227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.221 [2024-06-10 11:39:05.971231] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.221 [2024-06-10 11:39:05.971236] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.221 [2024-06-10 11:39:05.980922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.221 qpair failed and we were unable to recover it. 
00:30:37.221 [2024-06-10 11:39:05.991442] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.221 [2024-06-10 11:39:05.991473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.221 [2024-06-10 11:39:05.991482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.221 [2024-06-10 11:39:05.991489] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.221 [2024-06-10 11:39:05.991493] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.221 [2024-06-10 11:39:06.001212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.221 qpair failed and we were unable to recover it. 00:30:37.221 [2024-06-10 11:39:06.011578] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.221 [2024-06-10 11:39:06.011603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.221 [2024-06-10 11:39:06.011613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.221 [2024-06-10 11:39:06.011617] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.221 [2024-06-10 11:39:06.011622] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.221 [2024-06-10 11:39:06.021050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.221 qpair failed and we were unable to recover it. 00:30:37.221 [2024-06-10 11:39:06.031792] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.221 [2024-06-10 11:39:06.031824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.221 [2024-06-10 11:39:06.031834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.221 [2024-06-10 11:39:06.031839] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.221 [2024-06-10 11:39:06.031843] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.221 [2024-06-10 11:39:06.041333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.221 qpair failed and we were unable to recover it. 
00:30:37.221 [2024-06-10 11:39:06.051535] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.221 [2024-06-10 11:39:06.051563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.221 [2024-06-10 11:39:06.051572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.221 [2024-06-10 11:39:06.051577] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.221 [2024-06-10 11:39:06.051581] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.221 [2024-06-10 11:39:06.061227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.221 qpair failed and we were unable to recover it. 00:30:37.221 [2024-06-10 11:39:06.071858] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.221 [2024-06-10 11:39:06.071889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.221 [2024-06-10 11:39:06.071899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.221 [2024-06-10 11:39:06.071904] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.221 [2024-06-10 11:39:06.071908] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.221 [2024-06-10 11:39:06.081171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.221 qpair failed and we were unable to recover it. 00:30:37.221 [2024-06-10 11:39:06.091413] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.221 [2024-06-10 11:39:06.091444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.221 [2024-06-10 11:39:06.091454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.221 [2024-06-10 11:39:06.091458] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.221 [2024-06-10 11:39:06.091463] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.221 [2024-06-10 11:39:06.101206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.221 qpair failed and we were unable to recover it. 
00:30:37.221 [2024-06-10 11:39:06.111452] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.221 [2024-06-10 11:39:06.111479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.221 [2024-06-10 11:39:06.111488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.221 [2024-06-10 11:39:06.111493] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.221 [2024-06-10 11:39:06.111497] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.221 [2024-06-10 11:39:06.121074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.221 qpair failed and we were unable to recover it. 00:30:37.221 [2024-06-10 11:39:06.131650] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.221 [2024-06-10 11:39:06.131678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.222 [2024-06-10 11:39:06.131687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.222 [2024-06-10 11:39:06.131692] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.222 [2024-06-10 11:39:06.131698] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.222 [2024-06-10 11:39:06.141049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.222 qpair failed and we were unable to recover it. 00:30:37.222 [2024-06-10 11:39:06.151983] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.222 [2024-06-10 11:39:06.152018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.222 [2024-06-10 11:39:06.152028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.222 [2024-06-10 11:39:06.152033] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.222 [2024-06-10 11:39:06.152037] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.222 [2024-06-10 11:39:06.161633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.222 qpair failed and we were unable to recover it. 
00:30:37.222 [2024-06-10 11:39:06.171588] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.222 [2024-06-10 11:39:06.171617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.222 [2024-06-10 11:39:06.171629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.222 [2024-06-10 11:39:06.171633] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.222 [2024-06-10 11:39:06.171637] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.222 [2024-06-10 11:39:06.181459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.222 qpair failed and we were unable to recover it. 00:30:37.494 [2024-06-10 11:39:06.192332] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.494 [2024-06-10 11:39:06.192358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.494 [2024-06-10 11:39:06.192367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.494 [2024-06-10 11:39:06.192372] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.494 [2024-06-10 11:39:06.192376] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.494 [2024-06-10 11:39:06.201383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-06-10 11:39:06.211658] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.494 [2024-06-10 11:39:06.211685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.494 [2024-06-10 11:39:06.211694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.494 [2024-06-10 11:39:06.211699] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.494 [2024-06-10 11:39:06.211703] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.494 [2024-06-10 11:39:06.221396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.494 qpair failed and we were unable to recover it. 
00:30:37.494 [2024-06-10 11:39:06.232176] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.494 [2024-06-10 11:39:06.232210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.494 [2024-06-10 11:39:06.232219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.494 [2024-06-10 11:39:06.232224] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.494 [2024-06-10 11:39:06.232228] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.494 [2024-06-10 11:39:06.241437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-06-10 11:39:06.252212] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.494 [2024-06-10 11:39:06.252237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.494 [2024-06-10 11:39:06.252246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.494 [2024-06-10 11:39:06.252251] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.494 [2024-06-10 11:39:06.252258] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.494 [2024-06-10 11:39:06.261525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-06-10 11:39:06.272184] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.494 [2024-06-10 11:39:06.272213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.494 [2024-06-10 11:39:06.272223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.494 [2024-06-10 11:39:06.272227] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.494 [2024-06-10 11:39:06.272232] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.494 [2024-06-10 11:39:06.281735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.494 qpair failed and we were unable to recover it. 
00:30:37.494 [2024-06-10 11:39:06.292147] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.494 [2024-06-10 11:39:06.292174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.494 [2024-06-10 11:39:06.292183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.494 [2024-06-10 11:39:06.292188] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.494 [2024-06-10 11:39:06.292192] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.494 [2024-06-10 11:39:06.301663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-06-10 11:39:06.312262] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.494 [2024-06-10 11:39:06.312296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.494 [2024-06-10 11:39:06.312305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.494 [2024-06-10 11:39:06.312310] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.494 [2024-06-10 11:39:06.312314] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.494 [2024-06-10 11:39:06.321485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-06-10 11:39:06.332175] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.494 [2024-06-10 11:39:06.332206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.494 [2024-06-10 11:39:06.332215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.494 [2024-06-10 11:39:06.332220] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.494 [2024-06-10 11:39:06.332224] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.494 [2024-06-10 11:39:06.341603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.494 qpair failed and we were unable to recover it. 
00:30:37.494 [2024-06-10 11:39:06.352309] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.494 [2024-06-10 11:39:06.352336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.494 [2024-06-10 11:39:06.352345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.494 [2024-06-10 11:39:06.352350] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.494 [2024-06-10 11:39:06.352354] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.494 [2024-06-10 11:39:06.361947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-06-10 11:39:06.372328] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.494 [2024-06-10 11:39:06.372356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.495 [2024-06-10 11:39:06.372365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.495 [2024-06-10 11:39:06.372370] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.495 [2024-06-10 11:39:06.372374] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.495 [2024-06-10 11:39:06.381860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-06-10 11:39:06.392497] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.495 [2024-06-10 11:39:06.392533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.495 [2024-06-10 11:39:06.392543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.495 [2024-06-10 11:39:06.392547] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.495 [2024-06-10 11:39:06.392551] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.495 [2024-06-10 11:39:06.401796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.495 qpair failed and we were unable to recover it. 
00:30:37.495 [2024-06-10 11:39:06.412663] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.495 [2024-06-10 11:39:06.412694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.495 [2024-06-10 11:39:06.412703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.495 [2024-06-10 11:39:06.412708] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.495 [2024-06-10 11:39:06.412712] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.495 [2024-06-10 11:39:06.422214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-06-10 11:39:06.432203] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.495 [2024-06-10 11:39:06.432234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.495 [2024-06-10 11:39:06.432246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.495 [2024-06-10 11:39:06.432251] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.495 [2024-06-10 11:39:06.432255] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.495 [2024-06-10 11:39:06.441790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-06-10 11:39:06.451738] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.495 [2024-06-10 11:39:06.451778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.495 [2024-06-10 11:39:06.451788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.495 [2024-06-10 11:39:06.451793] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.495 [2024-06-10 11:39:06.451797] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.812 [2024-06-10 11:39:06.462173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.812 qpair failed and we were unable to recover it. 
00:30:37.812 [2024-06-10 11:39:06.471949] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.812 [2024-06-10 11:39:06.471980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.812 [2024-06-10 11:39:06.471989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.812 [2024-06-10 11:39:06.471994] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.812 [2024-06-10 11:39:06.471998] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.812 [2024-06-10 11:39:06.482214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.812 qpair failed and we were unable to recover it. 00:30:37.812 [2024-06-10 11:39:06.492676] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.812 [2024-06-10 11:39:06.492704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.812 [2024-06-10 11:39:06.492713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.812 [2024-06-10 11:39:06.492718] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.812 [2024-06-10 11:39:06.492722] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.812 [2024-06-10 11:39:06.502078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.812 qpair failed and we were unable to recover it. 00:30:37.812 [2024-06-10 11:39:06.512942] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.812 [2024-06-10 11:39:06.512972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.812 [2024-06-10 11:39:06.512981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.812 [2024-06-10 11:39:06.512986] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.812 [2024-06-10 11:39:06.512990] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.812 [2024-06-10 11:39:06.522467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.812 qpair failed and we were unable to recover it. 
00:30:37.812 [2024-06-10 11:39:06.532854] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.812 [2024-06-10 11:39:06.532883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.812 [2024-06-10 11:39:06.532893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.812 [2024-06-10 11:39:06.532897] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.812 [2024-06-10 11:39:06.532902] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.813 [2024-06-10 11:39:06.542308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.813 qpair failed and we were unable to recover it. 00:30:37.813 [2024-06-10 11:39:06.552729] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.813 [2024-06-10 11:39:06.552761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.813 [2024-06-10 11:39:06.552774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.813 [2024-06-10 11:39:06.552781] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.813 [2024-06-10 11:39:06.552785] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.813 [2024-06-10 11:39:06.562621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.813 qpair failed and we were unable to recover it. 00:30:37.813 [2024-06-10 11:39:06.573161] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.813 [2024-06-10 11:39:06.573190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.813 [2024-06-10 11:39:06.573200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.813 [2024-06-10 11:39:06.573204] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.813 [2024-06-10 11:39:06.573209] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.813 [2024-06-10 11:39:06.582058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.813 qpair failed and we were unable to recover it. 
00:30:37.813 [2024-06-10 11:39:06.592264] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.813 [2024-06-10 11:39:06.592289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.813 [2024-06-10 11:39:06.592298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.813 [2024-06-10 11:39:06.592303] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.813 [2024-06-10 11:39:06.592307] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.813 [2024-06-10 11:39:06.602459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.813 qpair failed and we were unable to recover it. 00:30:37.813 [2024-06-10 11:39:06.612975] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.813 [2024-06-10 11:39:06.613002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.813 [2024-06-10 11:39:06.613014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.813 [2024-06-10 11:39:06.613019] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.813 [2024-06-10 11:39:06.613023] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.813 [2024-06-10 11:39:06.622533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.813 qpair failed and we were unable to recover it. 00:30:37.813 [2024-06-10 11:39:06.633220] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.813 [2024-06-10 11:39:06.633255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.813 [2024-06-10 11:39:06.633264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.813 [2024-06-10 11:39:06.633268] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.813 [2024-06-10 11:39:06.633272] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.813 [2024-06-10 11:39:06.642690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.813 qpair failed and we were unable to recover it. 
00:30:37.813 [2024-06-10 11:39:06.653369] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.813 [2024-06-10 11:39:06.653401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.813 [2024-06-10 11:39:06.653410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.813 [2024-06-10 11:39:06.653414] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.813 [2024-06-10 11:39:06.653418] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.813 [2024-06-10 11:39:06.662497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.813 qpair failed and we were unable to recover it. 00:30:37.813 [2024-06-10 11:39:06.673294] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.813 [2024-06-10 11:39:06.673323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.813 [2024-06-10 11:39:06.673332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.813 [2024-06-10 11:39:06.673337] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.813 [2024-06-10 11:39:06.673341] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.813 [2024-06-10 11:39:06.682705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.813 qpair failed and we were unable to recover it. 00:30:37.813 [2024-06-10 11:39:06.693327] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.813 [2024-06-10 11:39:06.693360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.813 [2024-06-10 11:39:06.693380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.813 [2024-06-10 11:39:06.693385] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.813 [2024-06-10 11:39:06.693393] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.813 [2024-06-10 11:39:06.702657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.813 qpair failed and we were unable to recover it. 
00:30:37.813 [2024-06-10 11:39:06.713523] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.813 [2024-06-10 11:39:06.713554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.813 [2024-06-10 11:39:06.713565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.813 [2024-06-10 11:39:06.713570] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.813 [2024-06-10 11:39:06.713574] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.813 [2024-06-10 11:39:06.722811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.813 qpair failed and we were unable to recover it. 00:30:37.813 [2024-06-10 11:39:06.733078] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.813 [2024-06-10 11:39:06.733108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.813 [2024-06-10 11:39:06.733118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.813 [2024-06-10 11:39:06.733122] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.813 [2024-06-10 11:39:06.733127] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.813 [2024-06-10 11:39:06.742790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.813 qpair failed and we were unable to recover it. 00:30:37.813 [2024-06-10 11:39:06.753480] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.813 [2024-06-10 11:39:06.753510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.813 [2024-06-10 11:39:06.753520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.813 [2024-06-10 11:39:06.753525] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.813 [2024-06-10 11:39:06.753529] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.813 [2024-06-10 11:39:06.762814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.813 qpair failed and we were unable to recover it. 
00:30:37.813 [2024-06-10 11:39:06.773405] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.813 [2024-06-10 11:39:06.773433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.813 [2024-06-10 11:39:06.773442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.813 [2024-06-10 11:39:06.773447] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.813 [2024-06-10 11:39:06.773451] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:37.813 [2024-06-10 11:39:06.783091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.813 qpair failed and we were unable to recover it. 00:30:38.076 [2024-06-10 11:39:06.793628] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.076 [2024-06-10 11:39:06.793655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.076 [2024-06-10 11:39:06.793665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.076 [2024-06-10 11:39:06.793670] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.076 [2024-06-10 11:39:06.793675] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:38.076 [2024-06-10 11:39:06.802986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.076 qpair failed and we were unable to recover it. 00:30:38.076 [2024-06-10 11:39:06.813649] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.076 [2024-06-10 11:39:06.813676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.076 [2024-06-10 11:39:06.813685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.076 [2024-06-10 11:39:06.813690] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.076 [2024-06-10 11:39:06.813694] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:38.076 [2024-06-10 11:39:06.822995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.076 qpair failed and we were unable to recover it. 
00:30:38.076 [2024-06-10 11:39:06.833840] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.076 [2024-06-10 11:39:06.833867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.076 [2024-06-10 11:39:06.833876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.076 [2024-06-10 11:39:06.833881] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.076 [2024-06-10 11:39:06.833885] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:38.076 [2024-06-10 11:39:06.843265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.076 qpair failed and we were unable to recover it. 00:30:38.076 [2024-06-10 11:39:06.853609] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.076 [2024-06-10 11:39:06.853635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.076 [2024-06-10 11:39:06.853645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.076 [2024-06-10 11:39:06.853650] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.076 [2024-06-10 11:39:06.853654] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:38.076 [2024-06-10 11:39:06.863210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.076 qpair failed and we were unable to recover it. 00:30:38.076 [2024-06-10 11:39:06.873749] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.076 [2024-06-10 11:39:06.873786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.076 [2024-06-10 11:39:06.873799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.076 [2024-06-10 11:39:06.873804] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.076 [2024-06-10 11:39:06.873808] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:38.076 [2024-06-10 11:39:06.883392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.076 qpair failed and we were unable to recover it. 
00:30:38.076 [2024-06-10 11:39:06.893733] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.076 [2024-06-10 11:39:06.893760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.076 [2024-06-10 11:39:06.893773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.076 [2024-06-10 11:39:06.893778] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.076 [2024-06-10 11:39:06.893782] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:38.076 [2024-06-10 11:39:06.903363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.076 qpair failed and we were unable to recover it. 00:30:38.076 [2024-06-10 11:39:06.913450] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.076 [2024-06-10 11:39:06.913476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.076 [2024-06-10 11:39:06.913485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.076 [2024-06-10 11:39:06.913490] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.076 [2024-06-10 11:39:06.913494] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:38.076 [2024-06-10 11:39:06.923305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.076 qpair failed and we were unable to recover it. 00:30:38.076 [2024-06-10 11:39:06.933810] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.076 [2024-06-10 11:39:06.933837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.076 [2024-06-10 11:39:06.933846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.076 [2024-06-10 11:39:06.933851] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.076 [2024-06-10 11:39:06.933855] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:38.076 [2024-06-10 11:39:06.943365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.076 qpair failed and we were unable to recover it. 
00:30:38.076 [2024-06-10 11:39:06.954273] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.076 [2024-06-10 11:39:06.954309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.076 [2024-06-10 11:39:06.954319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.076 [2024-06-10 11:39:06.954323] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.076 [2024-06-10 11:39:06.954327] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:38.076 [2024-06-10 11:39:06.963493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.076 qpair failed and we were unable to recover it. 00:30:38.076 [2024-06-10 11:39:06.973670] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.076 [2024-06-10 11:39:06.973701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.076 [2024-06-10 11:39:06.973711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.076 [2024-06-10 11:39:06.973715] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.076 [2024-06-10 11:39:06.973719] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:38.076 [2024-06-10 11:39:06.983630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.076 qpair failed and we were unable to recover it. 00:30:38.076 [2024-06-10 11:39:06.994251] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.076 [2024-06-10 11:39:06.994284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.076 [2024-06-10 11:39:06.994293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.076 [2024-06-10 11:39:06.994298] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.076 [2024-06-10 11:39:06.994302] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:38.076 [2024-06-10 11:39:07.003507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.076 qpair failed and we were unable to recover it. 
00:30:38.076 [2024-06-10 11:39:07.014022] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.077 [2024-06-10 11:39:07.014050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.077 [2024-06-10 11:39:07.014060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.077 [2024-06-10 11:39:07.014064] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.077 [2024-06-10 11:39:07.014068] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:38.077 [2024-06-10 11:39:07.023858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.077 qpair failed and we were unable to recover it. 00:30:38.077 [2024-06-10 11:39:07.034442] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.077 [2024-06-10 11:39:07.034477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.077 [2024-06-10 11:39:07.034487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.077 [2024-06-10 11:39:07.034491] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.077 [2024-06-10 11:39:07.034495] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:38.077 [2024-06-10 11:39:07.043776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.077 qpair failed and we were unable to recover it. 00:30:38.338 [2024-06-10 11:39:07.054649] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.338 [2024-06-10 11:39:07.054679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.338 [2024-06-10 11:39:07.054691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.338 [2024-06-10 11:39:07.054695] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.338 [2024-06-10 11:39:07.054699] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:38.338 [2024-06-10 11:39:07.064012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.338 qpair failed and we were unable to recover it. 
00:30:38.338 [2024-06-10 11:39:07.074560] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.338 [2024-06-10 11:39:07.074589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.338 [2024-06-10 11:39:07.074599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.338 [2024-06-10 11:39:07.074603] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.338 [2024-06-10 11:39:07.074607] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:38.338 [2024-06-10 11:39:07.083979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.338 qpair failed and we were unable to recover it. 00:30:38.338 [2024-06-10 11:39:07.094299] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.338 [2024-06-10 11:39:07.094325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.338 [2024-06-10 11:39:07.094334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.338 [2024-06-10 11:39:07.094339] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.338 [2024-06-10 11:39:07.094343] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:38.338 [2024-06-10 11:39:07.103957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.338 qpair failed and we were unable to recover it. 00:30:38.338 [2024-06-10 11:39:07.114434] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.338 [2024-06-10 11:39:07.114466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.338 [2024-06-10 11:39:07.114476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.338 [2024-06-10 11:39:07.114480] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.338 [2024-06-10 11:39:07.114485] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:38.338 [2024-06-10 11:39:07.124006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.338 qpair failed and we were unable to recover it. 
00:30:38.338 [2024-06-10 11:39:07.134418] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.338 [2024-06-10 11:39:07.134444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.338 [2024-06-10 11:39:07.134454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.338 [2024-06-10 11:39:07.134458] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.338 [2024-06-10 11:39:07.134465] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.338 [2024-06-10 11:39:07.144198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.338 qpair failed and we were unable to recover it.
00:30:38.338 [2024-06-10 11:39:07.154189] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.338 [2024-06-10 11:39:07.154219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.338 [2024-06-10 11:39:07.154228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.338 [2024-06-10 11:39:07.154232] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.338 [2024-06-10 11:39:07.154236] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.338 [2024-06-10 11:39:07.163739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.338 qpair failed and we were unable to recover it.
00:30:38.338 [2024-06-10 11:39:07.174004] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.338 [2024-06-10 11:39:07.174030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.338 [2024-06-10 11:39:07.174039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.338 [2024-06-10 11:39:07.174044] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.338 [2024-06-10 11:39:07.174048] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.338 [2024-06-10 11:39:07.184200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.338 qpair failed and we were unable to recover it.
00:30:38.338 [2024-06-10 11:39:07.194713] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.338 [2024-06-10 11:39:07.194751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.338 [2024-06-10 11:39:07.194775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.338 [2024-06-10 11:39:07.194781] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.338 [2024-06-10 11:39:07.194786] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.338 [2024-06-10 11:39:07.204379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.338 qpair failed and we were unable to recover it.
00:30:38.338 [2024-06-10 11:39:07.214493] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.338 [2024-06-10 11:39:07.214523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.338 [2024-06-10 11:39:07.214534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.338 [2024-06-10 11:39:07.214538] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.338 [2024-06-10 11:39:07.214543] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.338 [2024-06-10 11:39:07.224365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.338 qpair failed and we were unable to recover it.
00:30:38.338 [2024-06-10 11:39:07.234404] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.338 [2024-06-10 11:39:07.234437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.338 [2024-06-10 11:39:07.234456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.338 [2024-06-10 11:39:07.234462] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.338 [2024-06-10 11:39:07.234466] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.338 [2024-06-10 11:39:07.244502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.338 qpair failed and we were unable to recover it.
00:30:38.338 [2024-06-10 11:39:07.254159] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.339 [2024-06-10 11:39:07.254187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.339 [2024-06-10 11:39:07.254198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.339 [2024-06-10 11:39:07.254203] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.339 [2024-06-10 11:39:07.254207] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.339 [2024-06-10 11:39:07.264271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.339 qpair failed and we were unable to recover it.
00:30:38.339 [2024-06-10 11:39:07.274996] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.339 [2024-06-10 11:39:07.275025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.339 [2024-06-10 11:39:07.275048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.339 [2024-06-10 11:39:07.275054] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.339 [2024-06-10 11:39:07.275058] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.339 [2024-06-10 11:39:07.284485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.339 qpair failed and we were unable to recover it.
00:30:38.339 [2024-06-10 11:39:07.294867] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.339 [2024-06-10 11:39:07.294896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.339 [2024-06-10 11:39:07.294907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.339 [2024-06-10 11:39:07.294912] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.339 [2024-06-10 11:39:07.294917] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.339 [2024-06-10 11:39:07.304400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.339 qpair failed and we were unable to recover it.
00:30:38.599 [2024-06-10 11:39:07.314865] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.599 [2024-06-10 11:39:07.314896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.599 [2024-06-10 11:39:07.314909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.599 [2024-06-10 11:39:07.314914] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.599 [2024-06-10 11:39:07.314918] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.600 [2024-06-10 11:39:07.324535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.600 qpair failed and we were unable to recover it.
00:30:38.600 [2024-06-10 11:39:07.336271] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.600 [2024-06-10 11:39:07.336297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.600 [2024-06-10 11:39:07.336307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.600 [2024-06-10 11:39:07.336312] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.600 [2024-06-10 11:39:07.336316] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.600 [2024-06-10 11:39:07.344655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.600 qpair failed and we were unable to recover it.
00:30:38.600 [2024-06-10 11:39:07.355309] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.600 [2024-06-10 11:39:07.355337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.600 [2024-06-10 11:39:07.355346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.600 [2024-06-10 11:39:07.355351] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.600 [2024-06-10 11:39:07.355355] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.600 [2024-06-10 11:39:07.364616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.600 qpair failed and we were unable to recover it.
00:30:38.600 [2024-06-10 11:39:07.375177] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.600 [2024-06-10 11:39:07.375202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.600 [2024-06-10 11:39:07.375212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.600 [2024-06-10 11:39:07.375216] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.600 [2024-06-10 11:39:07.375220] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.600 [2024-06-10 11:39:07.384677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.600 qpair failed and we were unable to recover it.
00:30:38.600 [2024-06-10 11:39:07.395377] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.600 [2024-06-10 11:39:07.395401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.600 [2024-06-10 11:39:07.395411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.600 [2024-06-10 11:39:07.395415] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.600 [2024-06-10 11:39:07.395419] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.600 [2024-06-10 11:39:07.404562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.600 qpair failed and we were unable to recover it.
00:30:38.600 [2024-06-10 11:39:07.415294] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.600 [2024-06-10 11:39:07.415320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.600 [2024-06-10 11:39:07.415329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.600 [2024-06-10 11:39:07.415334] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.600 [2024-06-10 11:39:07.415338] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.600 [2024-06-10 11:39:07.424845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.600 qpair failed and we were unable to recover it.
00:30:38.600 [2024-06-10 11:39:07.435716] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.600 [2024-06-10 11:39:07.435748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.600 [2024-06-10 11:39:07.435774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.600 [2024-06-10 11:39:07.435780] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.600 [2024-06-10 11:39:07.435784] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.600 [2024-06-10 11:39:07.445176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.600 qpair failed and we were unable to recover it.
00:30:38.600 [2024-06-10 11:39:07.455643] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.600 [2024-06-10 11:39:07.455677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.600 [2024-06-10 11:39:07.455687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.600 [2024-06-10 11:39:07.455692] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.600 [2024-06-10 11:39:07.455696] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.600 [2024-06-10 11:39:07.464918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.600 qpair failed and we were unable to recover it.
00:30:38.600 [2024-06-10 11:39:07.475477] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.600 [2024-06-10 11:39:07.475505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.600 [2024-06-10 11:39:07.475514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.600 [2024-06-10 11:39:07.475519] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.600 [2024-06-10 11:39:07.475524] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.600 [2024-06-10 11:39:07.485066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.600 qpair failed and we were unable to recover it.
00:30:38.600 [2024-06-10 11:39:07.495314] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.600 [2024-06-10 11:39:07.495342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.600 [2024-06-10 11:39:07.495355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.600 [2024-06-10 11:39:07.495359] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.600 [2024-06-10 11:39:07.495365] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.600 [2024-06-10 11:39:07.505121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.600 qpair failed and we were unable to recover it.
00:30:38.600 [2024-06-10 11:39:07.515849] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.600 [2024-06-10 11:39:07.515881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.600 [2024-06-10 11:39:07.515900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.600 [2024-06-10 11:39:07.515906] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.600 [2024-06-10 11:39:07.515910] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.600 [2024-06-10 11:39:07.525241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.600 qpair failed and we were unable to recover it.
00:30:38.600 [2024-06-10 11:39:07.535835] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.600 [2024-06-10 11:39:07.535874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.600 [2024-06-10 11:39:07.535884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.600 [2024-06-10 11:39:07.535889] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.600 [2024-06-10 11:39:07.535893] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.600 [2024-06-10 11:39:07.545312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.600 qpair failed and we were unable to recover it.
00:30:38.600 [2024-06-10 11:39:07.555550] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.600 [2024-06-10 11:39:07.555578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.600 [2024-06-10 11:39:07.555589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.600 [2024-06-10 11:39:07.555593] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.600 [2024-06-10 11:39:07.555598] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.600 [2024-06-10 11:39:07.565066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.600 qpair failed and we were unable to recover it.
00:30:38.862 [2024-06-10 11:39:07.574999] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.862 [2024-06-10 11:39:07.575028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.862 [2024-06-10 11:39:07.575038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.862 [2024-06-10 11:39:07.575042] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.862 [2024-06-10 11:39:07.575050] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.862 [2024-06-10 11:39:07.585351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.862 qpair failed and we were unable to recover it.
00:30:38.862 [2024-06-10 11:39:07.595950] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.862 [2024-06-10 11:39:07.595980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.862 [2024-06-10 11:39:07.595990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.862 [2024-06-10 11:39:07.595995] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.862 [2024-06-10 11:39:07.595999] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.862 [2024-06-10 11:39:07.605293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.862 qpair failed and we were unable to recover it.
00:30:38.862 [2024-06-10 11:39:07.615981] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.862 [2024-06-10 11:39:07.616015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.862 [2024-06-10 11:39:07.616025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.862 [2024-06-10 11:39:07.616029] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.862 [2024-06-10 11:39:07.616033] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.862 [2024-06-10 11:39:07.625290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.862 qpair failed and we were unable to recover it.
00:30:38.862 [2024-06-10 11:39:07.636005] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.862 [2024-06-10 11:39:07.636037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.862 [2024-06-10 11:39:07.636047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.862 [2024-06-10 11:39:07.636051] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.862 [2024-06-10 11:39:07.636056] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.862 [2024-06-10 11:39:07.645428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.862 qpair failed and we were unable to recover it.
00:30:38.862 [2024-06-10 11:39:07.655756] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.862 [2024-06-10 11:39:07.655786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.862 [2024-06-10 11:39:07.655796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.862 [2024-06-10 11:39:07.655800] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.862 [2024-06-10 11:39:07.655805] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.862 [2024-06-10 11:39:07.665664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.862 qpair failed and we were unable to recover it.
00:30:38.862 [2024-06-10 11:39:07.675817] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.862 [2024-06-10 11:39:07.675850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.862 [2024-06-10 11:39:07.675860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.862 [2024-06-10 11:39:07.675864] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.862 [2024-06-10 11:39:07.675868] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.862 [2024-06-10 11:39:07.685449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.862 qpair failed and we were unable to recover it.
00:30:38.862 [2024-06-10 11:39:07.696157] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.862 [2024-06-10 11:39:07.696185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.862 [2024-06-10 11:39:07.696195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.862 [2024-06-10 11:39:07.696199] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.862 [2024-06-10 11:39:07.696204] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.862 [2024-06-10 11:39:07.705614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.862 qpair failed and we were unable to recover it.
00:30:38.862 [2024-06-10 11:39:07.716392] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.862 [2024-06-10 11:39:07.716425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.862 [2024-06-10 11:39:07.716434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.862 [2024-06-10 11:39:07.716439] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.862 [2024-06-10 11:39:07.716444] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.862 [2024-06-10 11:39:07.725613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.862 qpair failed and we were unable to recover it.
00:30:38.862 [2024-06-10 11:39:07.736133] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.862 [2024-06-10 11:39:07.736160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.862 [2024-06-10 11:39:07.736169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.862 [2024-06-10 11:39:07.736174] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.862 [2024-06-10 11:39:07.736178] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.862 [2024-06-10 11:39:07.746033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.862 qpair failed and we were unable to recover it.
00:30:38.862 [2024-06-10 11:39:07.756595] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.862 [2024-06-10 11:39:07.756625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.862 [2024-06-10 11:39:07.756636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.862 [2024-06-10 11:39:07.756641] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.862 [2024-06-10 11:39:07.756645] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.862 [2024-06-10 11:39:07.766050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.862 qpair failed and we were unable to recover it.
00:30:38.862 [2024-06-10 11:39:07.776796] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.862 [2024-06-10 11:39:07.776829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.862 [2024-06-10 11:39:07.776838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.862 [2024-06-10 11:39:07.776843] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.862 [2024-06-10 11:39:07.776847] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.862 [2024-06-10 11:39:07.786014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.862 qpair failed and we were unable to recover it.
00:30:38.862 [2024-06-10 11:39:07.796674] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.863 [2024-06-10 11:39:07.796705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.863 [2024-06-10 11:39:07.796714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.863 [2024-06-10 11:39:07.796719] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.863 [2024-06-10 11:39:07.796723] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.863 [2024-06-10 11:39:07.806145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.863 qpair failed and we were unable to recover it.
00:30:38.863 [2024-06-10 11:39:07.816423] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.863 [2024-06-10 11:39:07.816454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.863 [2024-06-10 11:39:07.816463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.863 [2024-06-10 11:39:07.816468] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.863 [2024-06-10 11:39:07.816472] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:38.863 [2024-06-10 11:39:07.826315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.863 qpair failed and we were unable to recover it.
00:30:39.124 [2024-06-10 11:39:07.836848] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.124 [2024-06-10 11:39:07.836874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.124 [2024-06-10 11:39:07.836884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.124 [2024-06-10 11:39:07.836888] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.124 [2024-06-10 11:39:07.836893] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.124 [2024-06-10 11:39:07.845976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.124 qpair failed and we were unable to recover it.
00:30:39.124 [2024-06-10 11:39:07.857039] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.124 [2024-06-10 11:39:07.857070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.124 [2024-06-10 11:39:07.857080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.124 [2024-06-10 11:39:07.857084] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.124 [2024-06-10 11:39:07.857089] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.124 [2024-06-10 11:39:07.866313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.124 qpair failed and we were unable to recover it.
00:30:39.124 [2024-06-10 11:39:07.876988] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.124 [2024-06-10 11:39:07.877019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.124 [2024-06-10 11:39:07.877028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.124 [2024-06-10 11:39:07.877032] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.124 [2024-06-10 11:39:07.877037] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.124 [2024-06-10 11:39:07.886288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.124 qpair failed and we were unable to recover it.
00:30:39.124 [2024-06-10 11:39:07.896723] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.124 [2024-06-10 11:39:07.896751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.124 [2024-06-10 11:39:07.896776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.124 [2024-06-10 11:39:07.896782] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.124 [2024-06-10 11:39:07.896787] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.124 [2024-06-10 11:39:07.906235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.124 qpair failed and we were unable to recover it.
00:30:39.124 [2024-06-10 11:39:07.917372] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.124 [2024-06-10 11:39:07.917406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.124 [2024-06-10 11:39:07.917425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.124 [2024-06-10 11:39:07.917431] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.124 [2024-06-10 11:39:07.917436] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.124 [2024-06-10 11:39:07.926589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.124 qpair failed and we were unable to recover it.
00:30:39.124 [2024-06-10 11:39:07.936975] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.124 [2024-06-10 11:39:07.937000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.124 [2024-06-10 11:39:07.937014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.124 [2024-06-10 11:39:07.937019] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.124 [2024-06-10 11:39:07.937023] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.124 [2024-06-10 11:39:07.946452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.124 qpair failed and we were unable to recover it.
00:30:39.124 [2024-06-10 11:39:07.957064] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.124 [2024-06-10 11:39:07.957095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.124 [2024-06-10 11:39:07.957105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.124 [2024-06-10 11:39:07.957109] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.124 [2024-06-10 11:39:07.957114] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.124 [2024-06-10 11:39:07.966537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.124 qpair failed and we were unable to recover it.
00:30:39.124 [2024-06-10 11:39:07.976979] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.124 [2024-06-10 11:39:07.977007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.124 [2024-06-10 11:39:07.977017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.124 [2024-06-10 11:39:07.977021] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.124 [2024-06-10 11:39:07.977026] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.124 [2024-06-10 11:39:07.986510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.124 qpair failed and we were unable to recover it.
00:30:39.124 [2024-06-10 11:39:07.997450] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.124 [2024-06-10 11:39:07.997488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.124 [2024-06-10 11:39:07.997508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.124 [2024-06-10 11:39:07.997513] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.124 [2024-06-10 11:39:07.997518] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.124 [2024-06-10 11:39:08.006649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.124 qpair failed and we were unable to recover it.
00:30:39.124 [2024-06-10 11:39:08.017498] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.124 [2024-06-10 11:39:08.017526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.124 [2024-06-10 11:39:08.017537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.124 [2024-06-10 11:39:08.017542] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.124 [2024-06-10 11:39:08.017549] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.124 [2024-06-10 11:39:08.026592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.124 qpair failed and we were unable to recover it.
00:30:39.124 [2024-06-10 11:39:08.036767] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.124 [2024-06-10 11:39:08.036800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.124 [2024-06-10 11:39:08.036810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.124 [2024-06-10 11:39:08.036815] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.124 [2024-06-10 11:39:08.036819] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.124 [2024-06-10 11:39:08.046940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.124 qpair failed and we were unable to recover it.
00:30:39.124 [2024-06-10 11:39:08.057048] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.125 [2024-06-10 11:39:08.057075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.125 [2024-06-10 11:39:08.057085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.125 [2024-06-10 11:39:08.057089] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.125 [2024-06-10 11:39:08.057094] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.125 [2024-06-10 11:39:08.066853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.125 qpair failed and we were unable to recover it.
00:30:39.125 [2024-06-10 11:39:08.077574] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.125 [2024-06-10 11:39:08.077605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.125 [2024-06-10 11:39:08.077616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.125 [2024-06-10 11:39:08.077620] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.125 [2024-06-10 11:39:08.077625] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.125 [2024-06-10 11:39:08.086979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.125 qpair failed and we were unable to recover it.
00:30:39.386 [2024-06-10 11:39:08.097565] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.386 [2024-06-10 11:39:08.097600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.386 [2024-06-10 11:39:08.097609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.386 [2024-06-10 11:39:08.097614] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.386 [2024-06-10 11:39:08.097618] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.386 [2024-06-10 11:39:08.107069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.386 qpair failed and we were unable to recover it.
00:30:39.386 [2024-06-10 11:39:08.117895] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.386 [2024-06-10 11:39:08.117927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.386 [2024-06-10 11:39:08.117946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.386 [2024-06-10 11:39:08.117952] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.386 [2024-06-10 11:39:08.117957] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.386 [2024-06-10 11:39:08.126888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.386 qpair failed and we were unable to recover it.
00:30:39.386 [2024-06-10 11:39:08.137282] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.386 [2024-06-10 11:39:08.137309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.386 [2024-06-10 11:39:08.137319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.386 [2024-06-10 11:39:08.137324] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.386 [2024-06-10 11:39:08.137328] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.386 [2024-06-10 11:39:08.147198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.386 qpair failed and we were unable to recover it.
00:30:39.386 [2024-06-10 11:39:08.157688] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.386 [2024-06-10 11:39:08.157716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.386 [2024-06-10 11:39:08.157726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.386 [2024-06-10 11:39:08.157731] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.386 [2024-06-10 11:39:08.157736] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.386 [2024-06-10 11:39:08.167083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.386 qpair failed and we were unable to recover it.
00:30:39.386 [2024-06-10 11:39:08.177783] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.386 [2024-06-10 11:39:08.177817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.386 [2024-06-10 11:39:08.177826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.386 [2024-06-10 11:39:08.177831] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.386 [2024-06-10 11:39:08.177835] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.386 [2024-06-10 11:39:08.186979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.386 qpair failed and we were unable to recover it.
00:30:39.386 [2024-06-10 11:39:08.197921] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.386 [2024-06-10 11:39:08.197947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.386 [2024-06-10 11:39:08.197959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.386 [2024-06-10 11:39:08.197964] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.386 [2024-06-10 11:39:08.197968] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.386 [2024-06-10 11:39:08.207072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.386 qpair failed and we were unable to recover it.
00:30:39.386 [2024-06-10 11:39:08.217502] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.386 [2024-06-10 11:39:08.217533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.386 [2024-06-10 11:39:08.217553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.386 [2024-06-10 11:39:08.217558] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.386 [2024-06-10 11:39:08.217563] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.386 [2024-06-10 11:39:08.227328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.386 qpair failed and we were unable to recover it.
00:30:39.386 [2024-06-10 11:39:08.237932] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.386 [2024-06-10 11:39:08.237966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.386 [2024-06-10 11:39:08.237977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.386 [2024-06-10 11:39:08.237982] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.386 [2024-06-10 11:39:08.237986] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.386 [2024-06-10 11:39:08.247273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.386 qpair failed and we were unable to recover it.
00:30:39.386 [2024-06-10 11:39:08.257970] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.386 [2024-06-10 11:39:08.257997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.386 [2024-06-10 11:39:08.258007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.386 [2024-06-10 11:39:08.258012] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.386 [2024-06-10 11:39:08.258016] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.386 [2024-06-10 11:39:08.267417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.386 qpair failed and we were unable to recover it.
00:30:39.386 [2024-06-10 11:39:08.278160] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.386 [2024-06-10 11:39:08.278187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.386 [2024-06-10 11:39:08.278197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.386 [2024-06-10 11:39:08.278201] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.386 [2024-06-10 11:39:08.278206] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.386 [2024-06-10 11:39:08.287524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.386 qpair failed and we were unable to recover it.
00:30:39.386 [2024-06-10 11:39:08.297837] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.386 [2024-06-10 11:39:08.297866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.386 [2024-06-10 11:39:08.297876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.386 [2024-06-10 11:39:08.297881] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.386 [2024-06-10 11:39:08.297885] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.386 [2024-06-10 11:39:08.307512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.386 qpair failed and we were unable to recover it.
00:30:39.386 [2024-06-10 11:39:08.318018] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.386 [2024-06-10 11:39:08.318049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.386 [2024-06-10 11:39:08.318059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.386 [2024-06-10 11:39:08.318064] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.386 [2024-06-10 11:39:08.318068] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.386 [2024-06-10 11:39:08.327315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.386 qpair failed and we were unable to recover it.
00:30:39.386 [2024-06-10 11:39:08.338341] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.386 [2024-06-10 11:39:08.338372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.386 [2024-06-10 11:39:08.338381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.386 [2024-06-10 11:39:08.338386] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.386 [2024-06-10 11:39:08.338390] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.386 [2024-06-10 11:39:08.347557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.386 qpair failed and we were unable to recover it.
00:30:39.647 [2024-06-10 11:39:08.358313] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.647 [2024-06-10 11:39:08.358346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.647 [2024-06-10 11:39:08.358356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.647 [2024-06-10 11:39:08.358361] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.647 [2024-06-10 11:39:08.358365] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:30:39.647 [2024-06-10 11:39:08.367711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.647 qpair failed and we were unable to recover it.
00:30:40.586 Read completed with error (sct=0, sc=8) 00:30:40.586 starting I/O failed 00:30:40.586 Write completed with error (sct=0, sc=8) 00:30:40.586 starting I/O failed 00:30:40.586 Write completed with error (sct=0, sc=8) 00:30:40.586 starting I/O failed 00:30:40.586 Write completed with error (sct=0, sc=8) 00:30:40.586 starting I/O failed 00:30:40.586 Write completed with error (sct=0, sc=8) 00:30:40.586 starting I/O failed 00:30:40.586 Write completed with error (sct=0, sc=8) 00:30:40.586 starting I/O failed 00:30:40.586 Read completed with error (sct=0, sc=8) 00:30:40.586 starting I/O failed 00:30:40.586 Read completed with error (sct=0, sc=8) 00:30:40.586 starting I/O failed 00:30:40.587 Write completed with error (sct=0, sc=8) 00:30:40.587 starting I/O failed 00:30:40.587 Write completed with error (sct=0, sc=8) 00:30:40.587 starting I/O failed 00:30:40.587 Write completed with error (sct=0, sc=8) 00:30:40.587 starting I/O failed 00:30:40.587 Read completed with error (sct=0, sc=8) 00:30:40.587 starting I/O failed 00:30:40.587 Read completed with error (sct=0, sc=8) 00:30:40.587 starting I/O failed 00:30:40.587 Write completed with error (sct=0, sc=8) 00:30:40.587 starting I/O failed 00:30:40.587 Read completed with error (sct=0, sc=8) 00:30:40.587 starting I/O failed 00:30:40.587 Read completed with error (sct=0, sc=8) 00:30:40.587 starting I/O failed 00:30:40.587 Write completed with error (sct=0, sc=8) 00:30:40.587 starting I/O failed 00:30:40.587 Write completed with error (sct=0, sc=8) 00:30:40.587 starting I/O failed 00:30:40.587 Read completed with error (sct=0, sc=8) 00:30:40.587 starting I/O failed 00:30:40.587 Read completed with error (sct=0, sc=8) 00:30:40.587 starting I/O failed 00:30:40.587 Write completed with error (sct=0, sc=8) 00:30:40.587 starting I/O failed 00:30:40.587 Read completed with error (sct=0, sc=8) 00:30:40.587 starting I/O failed 00:30:40.587 Write completed with error (sct=0, sc=8) 00:30:40.587 starting I/O failed 00:30:40.587 Read completed with error (sct=0, sc=8) 00:30:40.587 starting I/O failed 00:30:40.587 Write completed with error (sct=0, sc=8) 00:30:40.587 starting I/O failed 00:30:40.587 Write completed with error (sct=0, sc=8) 00:30:40.587 starting I/O failed 00:30:40.587 Write completed with error (sct=0, sc=8) 00:30:40.587 starting I/O failed 00:30:40.587 Write completed with error (sct=0, sc=8) 00:30:40.587 starting I/O failed 00:30:40.587 Write completed with error (sct=0, sc=8) 00:30:40.587 starting I/O failed 00:30:40.587 Read completed with error (sct=0, sc=8) 00:30:40.587 starting I/O failed 00:30:40.587 Read completed with error (sct=0, sc=8) 00:30:40.587 starting I/O failed 00:30:40.587 Write completed with error (sct=0, sc=8) 00:30:40.587 starting I/O failed 00:30:40.587 [2024-06-10 11:39:09.373738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.587 [2024-06-10 11:39:09.380741] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.587 [2024-06-10 11:39:09.380785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.587 [2024-06-10 11:39:09.380803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.587 [2024-06-10 11:39:09.380811] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to 
poll NVMe-oF Fabric CONNECT command 00:30:40.587 [2024-06-10 11:39:09.380818] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002c1780 00:30:40.587 [2024-06-10 11:39:09.390505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.587 qpair failed and we were unable to recover it. 00:30:40.587 [2024-06-10 11:39:09.401364] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.587 [2024-06-10 11:39:09.401394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.587 [2024-06-10 11:39:09.401408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.587 [2024-06-10 11:39:09.401415] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.587 [2024-06-10 11:39:09.401421] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002c1780 00:30:40.587 [2024-06-10 11:39:09.410484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.587 qpair failed and we were unable to recover it. 00:30:41.527 Write completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Write completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Write completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Read completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Write completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Read completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Write completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Write completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Read completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Read completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Read completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Write completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Read completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Write completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Read completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Write completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Write completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Write completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Read completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Read completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Write completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Write completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Write completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Read completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Write completed with error 
(sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Write completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Read completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Read completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Read completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Read completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Read completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 Read completed with error (sct=0, sc=8) 00:30:41.527 starting I/O failed 00:30:41.527 [2024-06-10 11:39:10.416214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.527 [2024-06-10 11:39:10.424089] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.527 [2024-06-10 11:39:10.424128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.527 [2024-06-10 11:39:10.424146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.527 [2024-06-10 11:39:10.424154] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.527 [2024-06-10 11:39:10.424161] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:30:41.527 [2024-06-10 11:39:10.433431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.527 qpair failed and we were unable to recover it. 00:30:41.527 [2024-06-10 11:39:10.444292] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.527 [2024-06-10 11:39:10.444321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.527 [2024-06-10 11:39:10.444336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.527 [2024-06-10 11:39:10.444342] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.527 [2024-06-10 11:39:10.444349] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:30:41.527 [2024-06-10 11:39:10.453826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.527 qpair failed and we were unable to recover it. 00:30:41.527 [2024-06-10 11:39:10.454008] nvme_ctrlr.c:4395:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:41.527 A controller has encountered a failure and is being reset. 00:30:41.527 [2024-06-10 11:39:10.454128] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:30:41.527 [2024-06-10 11:39:10.456692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:41.527 Controller properly reset. 
00:30:42.912 Write completed with error (sct=0, sc=8) 00:30:42.912 starting I/O failed 00:30:42.912 Read completed with error (sct=0, sc=8) 00:30:42.912 starting I/O failed 00:30:42.912 Write completed with error (sct=0, sc=8) 00:30:42.912 starting I/O failed 00:30:42.912 Write completed with error (sct=0, sc=8) 00:30:42.912 starting I/O failed 00:30:42.912 Write completed with error (sct=0, sc=8) 00:30:42.912 starting I/O failed 00:30:42.912 Read completed with error (sct=0, sc=8) 00:30:42.912 starting I/O failed 00:30:42.912 Write completed with error (sct=0, sc=8) 00:30:42.912 starting I/O failed 00:30:42.912 Write completed with error (sct=0, sc=8) 00:30:42.912 starting I/O failed 00:30:42.912 Read completed with error (sct=0, sc=8) 00:30:42.912 starting I/O failed 00:30:42.912 Read completed with error (sct=0, sc=8) 00:30:42.912 starting I/O failed 00:30:42.912 Read completed with error (sct=0, sc=8) 00:30:42.912 starting I/O failed 00:30:42.912 Write completed with error (sct=0, sc=8) 00:30:42.912 starting I/O failed 00:30:42.912 Read completed with error (sct=0, sc=8) 00:30:42.912 starting I/O failed 00:30:42.912 Write completed with error (sct=0, sc=8) 00:30:42.912 starting I/O failed 00:30:42.912 Write completed with error (sct=0, sc=8) 00:30:42.912 starting I/O failed 00:30:42.912 Write completed with error (sct=0, sc=8) 00:30:42.912 starting I/O failed 00:30:42.912 Read completed with error (sct=0, sc=8) 00:30:42.912 starting I/O failed 00:30:42.913 Read completed with error (sct=0, sc=8) 00:30:42.913 starting I/O failed 00:30:42.913 Write completed with error (sct=0, sc=8) 00:30:42.913 starting I/O failed 00:30:42.913 Read completed with error (sct=0, sc=8) 00:30:42.913 starting I/O failed 00:30:42.913 Write completed with error (sct=0, sc=8) 00:30:42.913 starting I/O failed 00:30:42.913 Write completed with error (sct=0, sc=8) 00:30:42.913 starting I/O failed 00:30:42.913 Write completed with error (sct=0, sc=8) 00:30:42.913 starting I/O failed 00:30:42.913 Write completed with error (sct=0, sc=8) 00:30:42.913 starting I/O failed 00:30:42.913 Read completed with error (sct=0, sc=8) 00:30:42.913 starting I/O failed 00:30:42.913 Write completed with error (sct=0, sc=8) 00:30:42.913 starting I/O failed 00:30:42.913 Write completed with error (sct=0, sc=8) 00:30:42.913 starting I/O failed 00:30:42.913 Write completed with error (sct=0, sc=8) 00:30:42.913 starting I/O failed 00:30:42.913 Read completed with error (sct=0, sc=8) 00:30:42.913 starting I/O failed 00:30:42.913 Write completed with error (sct=0, sc=8) 00:30:42.913 starting I/O failed 00:30:42.913 Read completed with error (sct=0, sc=8) 00:30:42.913 starting I/O failed 00:30:42.913 Write completed with error (sct=0, sc=8) 00:30:42.913 starting I/O failed 00:30:42.913 [2024-06-10 11:39:11.470773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.913 Initializing NVMe Controllers 00:30:42.913 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:42.913 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:42.913 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:42.913 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:42.913 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:42.913 Associating RDMA (addr:192.168.100.8 
subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:42.913 Initialization complete. Launching workers. 00:30:42.913 Starting thread on core 1 00:30:42.913 Starting thread on core 2 00:30:42.913 Starting thread on core 3 00:30:42.913 Starting thread on core 0 00:30:42.913 11:39:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:42.913 00:30:42.913 real 0m14.580s 00:30:42.913 user 0m29.330s 00:30:42.913 sys 0m2.512s 00:30:42.913 11:39:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:42.913 11:39:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:42.913 ************************************ 00:30:42.913 END TEST nvmf_target_disconnect_tc2 00:30:42.913 ************************************ 00:30:42.913 11:39:11 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:30:42.913 11:39:11 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:30:42.913 11:39:11 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:42.913 11:39:11 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:42.913 11:39:11 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:42.913 ************************************ 00:30:42.913 START TEST nvmf_target_disconnect_tc3 00:30:42.913 ************************************ 00:30:42.913 11:39:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc3 00:30:42.913 11:39:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=3798220 00:30:42.913 11:39:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:30:42.913 11:39:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:30:42.913 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.827 11:39:13 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 3796057 00:30:44.827 11:39:13 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:30:46.209 Write completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Write completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Read completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Read completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Write completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Read completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Write completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Read completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Read completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Read completed with error (sct=0, sc=8) 00:30:46.209 starting I/O 
failed 00:30:46.209 Read completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Read completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Read completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Write completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Write completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Write completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Write completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Write completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Read completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Read completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Read completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Write completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Write completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Write completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Read completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Write completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Read completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Write completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Write completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Read completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Read completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.209 Write completed with error (sct=0, sc=8) 00:30:46.209 starting I/O failed 00:30:46.210 [2024-06-10 11:39:14.786571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.780 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 3796057 Killed "${NVMF_APP[@]}" "$@" 00:30:46.780 11:39:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:30:46.780 11:39:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:46.780 11:39:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:46.780 11:39:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:46.780 11:39:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:46.780 11:39:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3799095 00:30:46.781 11:39:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3799095 00:30:46.781 11:39:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:46.781 11:39:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@830 -- # '[' -z 3799095 ']' 00:30:46.781 11:39:15 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:46.781 11:39:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:46.781 11:39:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:46.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:46.781 11:39:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:46.781 11:39:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:46.781 [2024-06-10 11:39:15.667256] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:30:46.781 [2024-06-10 11:39:15.667311] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:46.781 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.781 [2024-06-10 11:39:15.745343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:47.041 Write completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 00:30:47.041 Read completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 00:30:47.041 Read completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 00:30:47.041 Read completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 00:30:47.041 Read completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 00:30:47.041 Write completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 00:30:47.041 Read completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 00:30:47.041 Read completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 00:30:47.041 Read completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 00:30:47.041 Read completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 00:30:47.041 Read completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 00:30:47.041 Write completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 00:30:47.041 Read completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 00:30:47.041 Write completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 00:30:47.041 Write completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 00:30:47.041 Write completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 00:30:47.041 Read completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 00:30:47.041 Read completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 00:30:47.041 Write completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 00:30:47.041 Write completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 00:30:47.041 Write completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 00:30:47.041 Read completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 00:30:47.041 Read completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 00:30:47.041 Read completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 00:30:47.041 Read completed with error (sct=0, sc=8) 00:30:47.041 starting I/O failed 
00:30:47.041 Write completed with error (sct=0, sc=8) 00:30:47.042 starting I/O failed 00:30:47.042 Write completed with error (sct=0, sc=8) 00:30:47.042 starting I/O failed 00:30:47.042 Write completed with error (sct=0, sc=8) 00:30:47.042 starting I/O failed 00:30:47.042 Write completed with error (sct=0, sc=8) 00:30:47.042 starting I/O failed 00:30:47.042 Read completed with error (sct=0, sc=8) 00:30:47.042 starting I/O failed 00:30:47.042 Read completed with error (sct=0, sc=8) 00:30:47.042 starting I/O failed 00:30:47.042 Read completed with error (sct=0, sc=8) 00:30:47.042 starting I/O failed 00:30:47.042 [2024-06-10 11:39:15.792129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.042 [2024-06-10 11:39:15.794753] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:47.042 [2024-06-10 11:39:15.794777] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:47.042 [2024-06-10 11:39:15.794784] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:47.042 [2024-06-10 11:39:15.800508] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:47.042 [2024-06-10 11:39:15.800535] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:47.042 [2024-06-10 11:39:15.800541] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:47.042 [2024-06-10 11:39:15.800545] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:47.042 [2024-06-10 11:39:15.800549] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:47.042 [2024-06-10 11:39:15.800692] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:30:47.042 [2024-06-10 11:39:15.800812] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:30:47.042 [2024-06-10 11:39:15.800934] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:30:47.042 [2024-06-10 11:39:15.800936] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:30:47.612 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:47.612 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@863 -- # return 0 00:30:47.612 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:47.612 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:47.612 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:47.612 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:47.612 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:47.612 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:47.612 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:47.612 Malloc0 00:30:47.612 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:47.612 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:30:47.612 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:47.612 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:47.612 [2024-06-10 11:39:16.527919] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1336450/0x1341f60) succeed. 00:30:47.612 [2024-06-10 11:39:16.539284] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1337a90/0x13835f0) succeed. 
00:30:47.873 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:47.873 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:47.873 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:47.873 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:47.873 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:47.873 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:47.873 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:47.873 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:47.873 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:47.873 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:30:47.873 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:47.873 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:47.873 [2024-06-10 11:39:16.671510] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:30:47.873 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:47.873 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:30:47.873 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:47.873 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:47.873 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:47.873 11:39:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 3798220 00:30:47.873 [2024-06-10 11:39:16.799250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.873 qpair failed and we were unable to recover it. 
00:30:47.873 [2024-06-10 11:39:16.801650] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:47.873 [2024-06-10 11:39:16.801664] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:47.873 [2024-06-10 11:39:16.801671] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:49.256 [2024-06-10 11:39:17.806161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.256 qpair failed and we were unable to recover it. 00:30:49.256 [2024-06-10 11:39:17.808630] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:49.256 [2024-06-10 11:39:17.808644] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:49.256 [2024-06-10 11:39:17.808650] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:50.196 [2024-06-10 11:39:18.813020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.196 qpair failed and we were unable to recover it. 00:30:50.196 [2024-06-10 11:39:18.815654] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:50.196 [2024-06-10 11:39:18.815667] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:50.196 [2024-06-10 11:39:18.815673] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:51.136 [2024-06-10 11:39:19.820011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.137 qpair failed and we were unable to recover it. 00:30:51.137 [2024-06-10 11:39:19.822153] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:51.137 [2024-06-10 11:39:19.822167] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:51.137 [2024-06-10 11:39:19.822172] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:52.079 [2024-06-10 11:39:20.826666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.079 qpair failed and we were unable to recover it. 00:30:52.079 [2024-06-10 11:39:20.829097] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:52.079 [2024-06-10 11:39:20.829111] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:52.079 [2024-06-10 11:39:20.829117] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:53.020 [2024-06-10 11:39:21.833143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.020 qpair failed and we were unable to recover it. 
00:30:53.020 [2024-06-10 11:39:21.835302] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:53.020 [2024-06-10 11:39:21.835315] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:53.020 [2024-06-10 11:39:21.835321] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:30:53.963 [2024-06-10 11:39:22.839272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.963 qpair failed and we were unable to recover it. 00:30:53.963 [2024-06-10 11:39:22.841743] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:53.963 [2024-06-10 11:39:22.841770] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:53.963 [2024-06-10 11:39:22.841776] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:30:54.940 [2024-06-10 11:39:23.846131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:54.940 qpair failed and we were unable to recover it. 00:30:54.940 [2024-06-10 11:39:23.848488] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:54.940 [2024-06-10 11:39:23.848499] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:54.940 [2024-06-10 11:39:23.848504] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:30:55.883 [2024-06-10 11:39:24.852581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:55.883 qpair failed and we were unable to recover it. 
00:30:57.270 Write completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Read completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Read completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Write completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Write completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Write completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Read completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Read completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Read completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Read completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Read completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Read completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Write completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Read completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Write completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Write completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Read completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Read completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Read completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Read completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Write completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Write completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Read completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Read completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Read completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Write completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Write completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Read completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Write completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Write completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Write completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 Write completed with error (sct=0, sc=8) 00:30:57.270 starting I/O failed 00:30:57.270 [2024-06-10 11:39:25.858397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:57.270 [2024-06-10 11:39:25.860172] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:57.270 [2024-06-10 11:39:25.860188] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:57.270 [2024-06-10 11:39:25.860194] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002ca780 00:30:58.211 [2024-06-10 11:39:26.864520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device 
or address) on qpair id 4 00:30:58.211 qpair failed and we were unable to recover it. 00:30:58.211 [2024-06-10 11:39:26.866893] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:58.211 [2024-06-10 11:39:26.866903] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:58.211 [2024-06-10 11:39:26.866907] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002ca780 00:30:59.152 [2024-06-10 11:39:27.871177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:59.152 qpair failed and we were unable to recover it. 00:30:59.152 [2024-06-10 11:39:27.871323] nvme_ctrlr.c:4395:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:59.152 A controller has encountered a failure and is being reset. 00:30:59.152 Resorting to new failover address 192.168.100.9 00:30:59.152 [2024-06-10 11:39:27.871414] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.152 [2024-06-10 11:39:27.871476] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:30:59.152 [2024-06-10 11:39:27.874137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:59.152 Controller properly reset. 00:31:00.094 Write completed with error (sct=0, sc=8) 00:31:00.094 starting I/O failed 00:31:00.094 Read completed with error (sct=0, sc=8) 00:31:00.094 starting I/O failed 00:31:00.094 Write completed with error (sct=0, sc=8) 00:31:00.094 starting I/O failed 00:31:00.094 Read completed with error (sct=0, sc=8) 00:31:00.094 starting I/O failed 00:31:00.094 Read completed with error (sct=0, sc=8) 00:31:00.094 starting I/O failed 00:31:00.094 Read completed with error (sct=0, sc=8) 00:31:00.094 starting I/O failed 00:31:00.094 Read completed with error (sct=0, sc=8) 00:31:00.094 starting I/O failed 00:31:00.094 Write completed with error (sct=0, sc=8) 00:31:00.094 starting I/O failed 00:31:00.094 Write completed with error (sct=0, sc=8) 00:31:00.094 starting I/O failed 00:31:00.094 Read completed with error (sct=0, sc=8) 00:31:00.094 starting I/O failed 00:31:00.094 Write completed with error (sct=0, sc=8) 00:31:00.094 starting I/O failed 00:31:00.094 Read completed with error (sct=0, sc=8) 00:31:00.094 starting I/O failed 00:31:00.094 Write completed with error (sct=0, sc=8) 00:31:00.094 starting I/O failed 00:31:00.094 Read completed with error (sct=0, sc=8) 00:31:00.094 starting I/O failed 00:31:00.094 Write completed with error (sct=0, sc=8) 00:31:00.094 starting I/O failed 00:31:00.094 Write completed with error (sct=0, sc=8) 00:31:00.094 starting I/O failed 00:31:00.094 Read completed with error (sct=0, sc=8) 00:31:00.094 starting I/O failed 00:31:00.094 Write completed with error (sct=0, sc=8) 00:31:00.094 starting I/O failed 00:31:00.094 Read completed with error (sct=0, sc=8) 00:31:00.094 starting I/O failed 00:31:00.094 Write completed with error (sct=0, sc=8) 00:31:00.094 starting I/O failed 00:31:00.094 Write completed with error (sct=0, sc=8) 00:31:00.094 starting I/O failed 00:31:00.094 Read completed with error (sct=0, sc=8) 00:31:00.094 starting I/O failed 00:31:00.094 Read 
completed with error (sct=0, sc=8) 00:31:00.095 starting I/O failed 00:31:00.095 Read completed with error (sct=0, sc=8) 00:31:00.095 starting I/O failed 00:31:00.095 Read completed with error (sct=0, sc=8) 00:31:00.095 starting I/O failed 00:31:00.095 Read completed with error (sct=0, sc=8) 00:31:00.095 starting I/O failed 00:31:00.095 Read completed with error (sct=0, sc=8) 00:31:00.095 starting I/O failed 00:31:00.095 Write completed with error (sct=0, sc=8) 00:31:00.095 starting I/O failed 00:31:00.095 Write completed with error (sct=0, sc=8) 00:31:00.095 starting I/O failed 00:31:00.095 Write completed with error (sct=0, sc=8) 00:31:00.095 starting I/O failed 00:31:00.095 Write completed with error (sct=0, sc=8) 00:31:00.095 starting I/O failed 00:31:00.095 Read completed with error (sct=0, sc=8) 00:31:00.095 starting I/O failed 00:31:00.095 [2024-06-10 11:39:28.931449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:00.095 Initializing NVMe Controllers 00:31:00.095 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:00.095 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:00.095 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:31:00.095 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:31:00.095 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:31:00.095 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:31:00.095 Initialization complete. Launching workers. 00:31:00.095 Starting thread on core 1 00:31:00.095 Starting thread on core 2 00:31:00.095 Starting thread on core 3 00:31:00.095 Starting thread on core 0 00:31:00.095 11:39:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:31:00.095 00:31:00.095 real 0m17.391s 00:31:00.095 user 1m9.898s 00:31:00.095 sys 0m3.722s 00:31:00.095 11:39:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:00.095 11:39:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:00.095 ************************************ 00:31:00.095 END TEST nvmf_target_disconnect_tc3 00:31:00.095 ************************************ 00:31:00.095 11:39:29 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:31:00.095 11:39:29 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:31:00.095 11:39:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:00.095 11:39:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:31:00.095 11:39:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:31:00.095 11:39:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:31:00.095 11:39:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:31:00.095 11:39:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:00.095 11:39:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:31:00.095 rmmod nvme_rdma 00:31:00.095 rmmod nvme_fabrics 00:31:00.355 11:39:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@123 -- # 
modprobe -v -r nvme-fabrics 00:31:00.355 11:39:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:31:00.355 11:39:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:31:00.355 11:39:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3799095 ']' 00:31:00.355 11:39:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3799095 00:31:00.355 11:39:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@949 -- # '[' -z 3799095 ']' 00:31:00.355 11:39:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # kill -0 3799095 00:31:00.355 11:39:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # uname 00:31:00.355 11:39:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:00.355 11:39:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3799095 00:31:00.355 11:39:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_4 00:31:00.355 11:39:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_4 = sudo ']' 00:31:00.355 11:39:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3799095' 00:31:00.355 killing process with pid 3799095 00:31:00.355 11:39:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # kill 3799095 00:31:00.355 11:39:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # wait 3799095 00:31:00.355 11:39:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:00.355 11:39:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:31:00.616 00:31:00.616 real 0m40.687s 00:31:00.616 user 2m36.044s 00:31:00.616 sys 0m11.887s 00:31:00.616 11:39:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:00.616 11:39:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:00.616 ************************************ 00:31:00.616 END TEST nvmf_target_disconnect 00:31:00.616 ************************************ 00:31:00.616 11:39:29 nvmf_rdma -- nvmf/nvmf.sh@125 -- # timing_exit host 00:31:00.616 11:39:29 nvmf_rdma -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:00.616 11:39:29 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:31:00.616 11:39:29 nvmf_rdma -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:31:00.616 00:31:00.616 real 22m20.566s 00:31:00.616 user 55m59.594s 00:31:00.616 sys 5m21.865s 00:31:00.616 11:39:29 nvmf_rdma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:00.616 11:39:29 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:31:00.616 ************************************ 00:31:00.616 END TEST nvmf_rdma 00:31:00.616 ************************************ 00:31:00.616 11:39:29 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:31:00.616 11:39:29 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:00.616 11:39:29 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:00.616 11:39:29 -- common/autotest_common.sh@10 -- # set +x 00:31:00.616 ************************************ 00:31:00.616 START TEST spdkcli_nvmf_rdma 00:31:00.616 ************************************ 00:31:00.616 11:39:29 spdkcli_nvmf_rdma -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:31:00.616 * Looking for test storage... 00:31:00.616 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:31:00.616 11:39:29 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:31:00.616 11:39:29 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:00.616 11:39:29 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:31:00.616 11:39:29 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:00.616 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:31:00.877 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3801820 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 3801820 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@830 -- # '[' -z 3801820 ']' 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@837 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:00.878 11:39:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:31:00.878 [2024-06-10 11:39:29.679353] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:31:00.878 [2024-06-10 11:39:29.679434] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3801820 ] 00:31:00.878 EAL: No free 2048 kB hugepages reported on node 1 00:31:00.878 [2024-06-10 11:39:29.746700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:00.878 [2024-06-10 11:39:29.821670] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:00.878 [2024-06-10 11:39:29.821672] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.820 11:39:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:01.820 11:39:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@863 -- # return 0 00:31:01.820 11:39:30 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:01.820 11:39:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:01.820 11:39:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:31:01.820 11:39:30 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:01.820 11:39:30 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:31:01.820 11:39:30 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:31:01.820 11:39:30 spdkcli_nvmf_rdma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:31:01.820 11:39:30 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:01.820 11:39:30 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:01.821 11:39:30 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:01.821 11:39:30 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:01.821 11:39:30 spdkcli_nvmf_rdma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.821 11:39:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:01.821 11:39:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.821 11:39:30 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:01.821 11:39:30 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:01.821 11:39:30 spdkcli_nvmf_rdma -- nvmf/common.sh@285 -- # xtrace_disable 00:31:01.821 11:39:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # pci_devs=() 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # net_devs=() 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # e810=() 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # local -ga e810 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # x722=() 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # local -ga x722 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # mlx=() 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # local -ga mlx 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:31:08.404 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:31:08.404 Found 0000:98:00.1 (0x15b3 - 0x1015) 
00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:31:08.404 Found net devices under 0000:98:00.0: mlx_0_0 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:31:08.404 Found net devices under 0000:98:00.1: mlx_0_1 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # is_hw=yes 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # rdma_device_init 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # uname 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@63 -- # modprobe ib_core 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- 
nvmf/common.sh@68 -- # modprobe rdma_ucm 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:31:08.404 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:31:08.405 26: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:08.405 link/ether ec:0d:9a:8b:2b:a4 brd ff:ff:ff:ff:ff:ff 00:31:08.405 altname enp152s0f0np0 00:31:08.405 altname ens817f0np0 00:31:08.405 inet 192.168.100.8/24 scope global mlx_0_0 00:31:08.405 valid_lft forever preferred_lft forever 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:31:08.405 27: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:08.405 link/ether ec:0d:9a:8b:2b:a5 brd ff:ff:ff:ff:ff:ff 00:31:08.405 altname enp152s0f1np1 00:31:08.405 altname ens817f1np1 00:31:08.405 inet 192.168.100.9/24 scope global mlx_0_1 00:31:08.405 valid_lft forever preferred_lft forever 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # return 0 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@112 
-- # interface=mlx_0_1 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:31:08.405 192.168.100.9' 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:31:08.405 192.168.100.9' 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # head -n 1 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:31:08.405 192.168.100.9' 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # tail -n +2 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # head -n 1 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:31:08.405 11:39:37 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:08.405 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:08.405 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:08.405 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:08.405 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:08.405 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:08.405 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:08.405 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:08.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:08.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:08.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:31:08.405 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:08.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:08.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 
IPv4'\'' '\''192.168.100.8:4260'\'' True 00:31:08.405 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:08.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:08.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:31:08.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:31:08.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:08.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:08.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:08.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:08.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:31:08.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:31:08.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:08.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:08.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:08.405 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:08.405 ' 00:31:10.946 [2024-06-10 11:39:39.601989] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1361fe0/0x1379c70) succeed. 00:31:10.946 [2024-06-10 11:39:39.618290] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1369770/0x13bb300) succeed. 
00:31:12.327 [2024-06-10 11:39:40.956165] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:31:15.031 [2024-06-10 11:39:43.375395] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:31:16.940 [2024-06-10 11:39:45.454015] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:31:18.344 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:18.344 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:18.344 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:18.344 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:18.344 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:18.344 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:18.344 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:18.344 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:18.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:18.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:18.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:31:18.344 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:18.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:18.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:31:18.344 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:18.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:18.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:31:18.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:31:18.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:18.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:18.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:18.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:18.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:31:18.344 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:31:18.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:18.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:18.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:18.344 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:18.344 11:39:47 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:18.344 11:39:47 spdkcli_nvmf_rdma -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:18.344 11:39:47 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:31:18.344 11:39:47 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:18.344 11:39:47 spdkcli_nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:18.344 11:39:47 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:31:18.344 11:39:47 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:31:18.344 11:39:47 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:18.604 11:39:47 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:18.605 11:39:47 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:18.605 11:39:47 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:18.605 11:39:47 spdkcli_nvmf_rdma -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:18.605 11:39:47 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:31:18.865 11:39:47 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:18.865 11:39:47 spdkcli_nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:18.865 11:39:47 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:31:18.865 11:39:47 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:18.865 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:18.865 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:18.865 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:18.865 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:31:18.865 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:31:18.865 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:18.865 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:18.865 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:18.865 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:18.865 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:18.865 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:18.865 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:18.865 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:18.865 ' 00:31:24.150 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:24.150 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:24.150 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:24.150 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:24.150 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:31:24.150 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:31:24.150 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:24.150 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:24.150 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:24.150 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:24.150 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:24.150 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:24.150 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:24.150 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:24.150 11:39:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:24.150 11:39:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:24.150 11:39:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:31:24.150 11:39:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 3801820 00:31:24.150 11:39:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@949 -- # '[' -z 3801820 ']' 00:31:24.150 11:39:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@953 -- # kill -0 3801820 00:31:24.150 11:39:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # uname 00:31:24.150 11:39:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:24.150 11:39:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3801820 00:31:24.150 11:39:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:24.150 11:39:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:24.150 11:39:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3801820' 00:31:24.150 killing process with pid 3801820 00:31:24.150 11:39:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@968 -- # kill 3801820 00:31:24.150 11:39:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@973 -- # wait 3801820 00:31:24.410 11:39:53 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:31:24.410 11:39:53 spdkcli_nvmf_rdma -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:24.410 11:39:53 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # sync 00:31:24.410 11:39:53 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 
00:31:24.410 11:39:53 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:31:24.410 11:39:53 spdkcli_nvmf_rdma -- nvmf/common.sh@120 -- # set +e 00:31:24.410 11:39:53 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:24.410 11:39:53 spdkcli_nvmf_rdma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:31:24.410 rmmod nvme_rdma 00:31:24.410 rmmod nvme_fabrics 00:31:24.410 11:39:53 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:24.410 11:39:53 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set -e 00:31:24.410 11:39:53 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # return 0 00:31:24.410 11:39:53 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:31:24.410 11:39:53 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:24.410 11:39:53 spdkcli_nvmf_rdma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:31:24.410 00:31:24.410 real 0m23.748s 00:31:24.410 user 0m51.859s 00:31:24.410 sys 0m5.825s 00:31:24.410 11:39:53 spdkcli_nvmf_rdma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:24.410 11:39:53 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:31:24.410 ************************************ 00:31:24.410 END TEST spdkcli_nvmf_rdma 00:31:24.410 ************************************ 00:31:24.410 11:39:53 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:31:24.410 11:39:53 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:31:24.410 11:39:53 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:31:24.410 11:39:53 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:31:24.410 11:39:53 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:31:24.410 11:39:53 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:31:24.410 11:39:53 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:31:24.410 11:39:53 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:31:24.410 11:39:53 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:31:24.410 11:39:53 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:31:24.410 11:39:53 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:31:24.410 11:39:53 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:31:24.410 11:39:53 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:31:24.410 11:39:53 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:31:24.410 11:39:53 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:31:24.410 11:39:53 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:31:24.410 11:39:53 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:31:24.410 11:39:53 -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:24.410 11:39:53 -- common/autotest_common.sh@10 -- # set +x 00:31:24.410 11:39:53 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:31:24.410 11:39:53 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:31:24.410 11:39:53 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:31:24.410 11:39:53 -- common/autotest_common.sh@10 -- # set +x 00:31:32.559 INFO: APP EXITING 00:31:32.559 INFO: killing all VMs 00:31:32.559 INFO: killing vhost app 00:31:32.559 INFO: EXIT DONE 00:31:35.106 Waiting for block devices as requested 00:31:35.106 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:35.106 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:35.106 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:35.106 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:35.367 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:35.367 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:35.367 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:35.367 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 
00:31:35.628 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:35.628 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:35.889 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:35.889 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:35.889 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:35.889 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:36.150 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:36.150 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:36.150 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:39.453 Cleaning 00:31:39.453 Removing: /var/run/dpdk/spdk0/config 00:31:39.453 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:39.453 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:39.453 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:39.453 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:39.453 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:31:39.453 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:31:39.453 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:31:39.453 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:31:39.453 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:39.453 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:39.453 Removing: /var/run/dpdk/spdk1/config 00:31:39.453 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:39.453 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:39.453 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:39.453 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:39.453 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:31:39.453 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:31:39.453 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:31:39.453 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:31:39.453 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:39.453 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:39.453 Removing: /var/run/dpdk/spdk1/mp_socket 00:31:39.453 Removing: /var/run/dpdk/spdk2/config 00:31:39.453 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:39.453 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:39.453 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:39.453 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:39.453 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:31:39.453 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:31:39.453 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:31:39.453 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:31:39.453 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:39.453 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:39.453 Removing: /var/run/dpdk/spdk3/config 00:31:39.453 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:39.453 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:39.453 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:39.453 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:39.453 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:31:39.453 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:31:39.453 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:31:39.453 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:31:39.453 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:39.453 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:39.453 Removing: /var/run/dpdk/spdk4/config 00:31:39.453 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:39.453 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:39.453 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:39.453 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:31:39.453 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:31:39.453 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:31:39.453 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:31:39.453 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:31:39.453 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:39.453 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:39.453 Removing: /dev/shm/bdevperf_trace.pid3531815 00:31:39.453 Removing: /dev/shm/bdevperf_trace.pid3703584 00:31:39.453 Removing: /dev/shm/bdev_svc_trace.1 00:31:39.453 Removing: /dev/shm/nvmf_trace.0 00:31:39.453 Removing: /dev/shm/spdk_tgt_trace.pid3403790 00:31:39.453 Removing: /var/run/dpdk/spdk0 00:31:39.453 Removing: /var/run/dpdk/spdk1 00:31:39.453 Removing: /var/run/dpdk/spdk2 00:31:39.453 Removing: /var/run/dpdk/spdk3 00:31:39.453 Removing: /var/run/dpdk/spdk4 00:31:39.453 Removing: /var/run/dpdk/spdk_pid3401803 00:31:39.453 Removing: /var/run/dpdk/spdk_pid3403790 00:31:39.453 Removing: /var/run/dpdk/spdk_pid3404364 00:31:39.453 Removing: /var/run/dpdk/spdk_pid3405413 00:31:39.453 Removing: /var/run/dpdk/spdk_pid3405735 00:31:39.453 Removing: /var/run/dpdk/spdk_pid3406802 00:31:39.453 Removing: /var/run/dpdk/spdk_pid3406990 00:31:39.453 Removing: /var/run/dpdk/spdk_pid3407256 00:31:39.453 Removing: /var/run/dpdk/spdk_pid3412017 00:31:39.453 Removing: /var/run/dpdk/spdk_pid3412724 00:31:39.453 Removing: /var/run/dpdk/spdk_pid3413026 00:31:39.453 Removing: /var/run/dpdk/spdk_pid3413271 00:31:39.453 Removing: /var/run/dpdk/spdk_pid3413657 00:31:39.453 Removing: /var/run/dpdk/spdk_pid3414043 00:31:39.453 Removing: /var/run/dpdk/spdk_pid3414400 00:31:39.453 Removing: /var/run/dpdk/spdk_pid3414677 00:31:39.453 Removing: /var/run/dpdk/spdk_pid3414900 00:31:39.453 Removing: /var/run/dpdk/spdk_pid3416200 00:31:39.453 Removing: /var/run/dpdk/spdk_pid3419462 00:31:39.714 Removing: /var/run/dpdk/spdk_pid3419821 00:31:39.714 Removing: /var/run/dpdk/spdk_pid3420186 00:31:39.714 Removing: /var/run/dpdk/spdk_pid3420483 00:31:39.714 Removing: /var/run/dpdk/spdk_pid3420889 00:31:39.714 Removing: /var/run/dpdk/spdk_pid3420908 00:31:39.714 Removing: /var/run/dpdk/spdk_pid3421326 00:31:39.714 Removing: /var/run/dpdk/spdk_pid3421607 00:31:39.714 Removing: /var/run/dpdk/spdk_pid3421972 00:31:39.714 Removing: /var/run/dpdk/spdk_pid3421988 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3422341 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3422364 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3422924 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3423146 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3423546 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3423912 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3423937 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3424071 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3424355 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3424708 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3425057 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3425383 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3425584 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3425800 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3426145 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3426501 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3426850 00:31:39.715 Removing: 
/var/run/dpdk/spdk_pid3427039 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3427252 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3427590 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3427937 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3428292 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3428541 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3428732 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3429034 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3429386 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3429739 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3430029 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3430154 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3430564 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3435066 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3486467 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3491304 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3503139 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3509750 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3513958 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3514798 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3531815 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3532171 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3536896 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3543640 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3546718 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3557929 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3587135 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3591298 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3649469 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3668372 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3701241 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3702324 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3703584 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3708295 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3716431 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3717491 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3718496 00:31:39.715 Removing: /var/run/dpdk/spdk_pid3719504 00:31:39.977 Removing: /var/run/dpdk/spdk_pid3719961 00:31:39.977 Removing: /var/run/dpdk/spdk_pid3724953 00:31:39.977 Removing: /var/run/dpdk/spdk_pid3725046 00:31:39.977 Removing: /var/run/dpdk/spdk_pid3729619 00:31:39.977 Removing: /var/run/dpdk/spdk_pid3730286 00:31:39.977 Removing: /var/run/dpdk/spdk_pid3730955 00:31:39.977 Removing: /var/run/dpdk/spdk_pid3731903 00:31:39.977 Removing: /var/run/dpdk/spdk_pid3731956 00:31:39.977 Removing: /var/run/dpdk/spdk_pid3737349 00:31:39.977 Removing: /var/run/dpdk/spdk_pid3738028 00:31:39.977 Removing: /var/run/dpdk/spdk_pid3742944 00:31:39.977 Removing: /var/run/dpdk/spdk_pid3746632 00:31:39.977 Removing: /var/run/dpdk/spdk_pid3753033 00:31:39.977 Removing: /var/run/dpdk/spdk_pid3764144 00:31:39.977 Removing: /var/run/dpdk/spdk_pid3764219 00:31:39.977 Removing: /var/run/dpdk/spdk_pid3787516 00:31:39.977 Removing: /var/run/dpdk/spdk_pid3787833 00:31:39.977 Removing: /var/run/dpdk/spdk_pid3794584 00:31:39.977 Removing: /var/run/dpdk/spdk_pid3795207 00:31:39.977 Removing: /var/run/dpdk/spdk_pid3798220 00:31:39.977 Removing: /var/run/dpdk/spdk_pid3801820 00:31:39.977 Clean 00:31:39.977 11:40:08 -- common/autotest_common.sh@1450 -- # return 0 00:31:39.977 11:40:08 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:31:39.977 11:40:08 -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:39.977 11:40:08 -- common/autotest_common.sh@10 -- # set +x 00:31:39.977 11:40:08 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:31:39.977 11:40:08 -- common/autotest_common.sh@729 -- # 
xtrace_disable
00:31:39.977 11:40:08 -- common/autotest_common.sh@10 -- # set +x
00:31:39.977 11:40:08 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:31:39.977 11:40:08 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]]
00:31:39.977 11:40:08 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log
00:31:39.977 11:40:08 -- spdk/autotest.sh@391 -- # hash lcov
00:31:39.977 11:40:08 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:31:40.239 11:40:08 -- spdk/autotest.sh@393 -- # hostname
00:31:40.239 11:40:08 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-cyp-13 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info
00:32:06.827 geninfo: WARNING: invalid characters removed from testname!
00:32:06.827 11:40:31 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:32:06.827 11:40:34 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:32:07.107 11:40:36 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:32:09.022 11:40:37 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:32:10.408 11:40:39 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:32:11.867 11:40:40 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:32:13.252 11:40:42 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:32:13.252 11:40:42 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:32:13.253 11:40:42 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:32:13.253 11:40:42 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:13.253 11:40:42 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:13.253 11:40:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:13.253 11:40:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:13.253 11:40:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:13.253 11:40:42 -- paths/export.sh@5 -- $ export PATH
00:32:13.253 11:40:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:13.253 11:40:42 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:32:13.253 11:40:42 -- common/autobuild_common.sh@437 -- $ date +%s
00:32:13.253 11:40:42 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718012442.XXXXXX
00:32:13.253 11:40:42 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718012442.ODCiDK
00:32:13.253 11:40:42 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:32:13.253 11:40:42 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:32:13.253 11:40:42 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
00:32:13.253 11:40:42 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:32:13.253 11:40:42 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:32:13.253 11:40:42 -- common/autobuild_common.sh@453 -- $ get_config_params
00:32:13.253 11:40:42 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:32:13.253 11:40:42 -- common/autotest_common.sh@10 -- $ set +x
00:32:13.253 11:40:42 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:32:13.253 11:40:42 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:32:13.253 11:40:42 -- pm/common@17 -- $ local monitor
00:32:13.253 11:40:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:13.253 11:40:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:13.253 11:40:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:13.253 11:40:42 -- pm/common@21 -- $ date +%s
00:32:13.253 11:40:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:13.253 11:40:42 -- pm/common@21 -- $ date +%s
00:32:13.253 11:40:42 -- pm/common@25 -- $ sleep 1
00:32:13.253 11:40:42 -- pm/common@21 -- $ date +%s
00:32:13.253 11:40:42 -- pm/common@21 -- $ date +%s
00:32:13.253 11:40:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718012442
00:32:13.253 11:40:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718012442
00:32:13.253 11:40:42 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718012442
00:32:13.253 11:40:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718012442
00:32:13.513 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718012442_collect-vmstat.pm.log
00:32:13.513 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718012442_collect-cpu-load.pm.log
00:32:13.513 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718012442_collect-cpu-temp.pm.log
00:32:13.513 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718012442_collect-bmc-pm.bmc.pm.log
00:32:14.455 11:40:43 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:32:14.455 11:40:43 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144
00:32:14.455 11:40:43 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:32:14.455 11:40:43 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:32:14.455 11:40:43 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:32:14.455 11:40:43 -- spdk/autopackage.sh@19 -- $ timing_finish
00:32:14.455 11:40:43 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:32:14.455 11:40:43 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:32:14.455 11:40:43 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:32:14.455 11:40:43 -- spdk/autopackage.sh@20 -- $ exit 0
00:32:14.455 11:40:43 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:32:14.455 11:40:43 -- pm/common@29 -- $ signal_monitor_resources TERM
00:32:14.455 11:40:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:32:14.455 11:40:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:14.455 11:40:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:32:14.455 11:40:43 -- pm/common@44 -- $ pid=3819937
00:32:14.455 11:40:43 -- pm/common@50 -- $ kill -TERM 3819937
00:32:14.456 11:40:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:14.456 11:40:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:32:14.456 11:40:43 -- pm/common@44 -- $ pid=3819938
00:32:14.456 11:40:43 -- pm/common@50 -- $ kill -TERM 3819938
00:32:14.456 11:40:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:14.456 11:40:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:32:14.456 11:40:43 -- pm/common@44 -- $ pid=3819941
00:32:14.456 11:40:43 -- pm/common@50 -- $ kill -TERM 3819941
00:32:14.456 11:40:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:14.456 11:40:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:32:14.456 11:40:43 -- pm/common@44 -- $ pid=3819963
00:32:14.456 11:40:43 -- pm/common@50 -- $ sudo -E kill -TERM 3819963
00:32:14.456 + [[ -n 3282874 ]]
00:32:14.456 + sudo kill 3282874
00:32:14.468 [Pipeline] }
00:32:14.488 [Pipeline] // stage
00:32:14.494 [Pipeline] }
00:32:14.513 [Pipeline] // timeout
00:32:14.518 [Pipeline] }
00:32:14.536 [Pipeline] // catchError
00:32:14.542 [Pipeline] }
00:32:14.560 [Pipeline] // wrap
00:32:14.567 [Pipeline] }
00:32:14.585 [Pipeline] // catchError
00:32:14.596 [Pipeline] stage
00:32:14.599 [Pipeline] { (Epilogue)
00:32:14.615 [Pipeline] catchError
00:32:14.617 [Pipeline] {
00:32:14.632 [Pipeline] echo
00:32:14.634 Cleanup processes
00:32:14.641 [Pipeline] sh
00:32:14.935 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:32:14.935 3820047 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache
00:32:14.935 3820486 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:32:14.950 [Pipeline] sh
00:32:15.238 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:32:15.238 ++ grep -v 'sudo pgrep'
00:32:15.238 ++ awk '{print $1}'
00:32:15.238 + sudo kill -9 3820047
00:32:15.251 [Pipeline] sh
00:32:15.537 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:32:25.544 [Pipeline] sh
00:32:25.830 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:32:25.830 Artifacts sizes are good
00:32:25.845 [Pipeline] archiveArtifacts
00:32:25.852 Archiving artifacts
00:32:26.033 [Pipeline] sh
00:32:26.318 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest
00:32:26.333 [Pipeline] cleanWs
00:32:26.343 [WS-CLEANUP] Deleting project workspace...
00:32:26.343 [WS-CLEANUP] Deferred wipeout is used...
00:32:26.350 [WS-CLEANUP] done
00:32:26.352 [Pipeline] }
00:32:26.373 [Pipeline] // catchError
00:32:26.386 [Pipeline] sh
00:32:26.673 + logger -p user.info -t JENKINS-CI
00:32:26.683 [Pipeline] }
00:32:26.699 [Pipeline] // stage
00:32:26.705 [Pipeline] }
00:32:26.723 [Pipeline] // node
00:32:26.728 [Pipeline] End of Pipeline
00:32:26.760 Finished: SUCCESS